I really think you need to qualify this statement a whole lot more than you have in the OP. Otherwise, it sounds like you’re saying, “before you do anything else, get yourself a huge leather chair and a fluffy white cat to pet while sitting in it”. This would, admittedly, be a lot of fun, but it has little to do with rationality.
In addition, assuming I understand your real point correctly (and I’m not at all certain that I do), I admit that it feels really persuasive. Thus, I am instantly suspicious of it, and I would really like to see some evidence. Does the act of deliberately thinking like a villain improve a person’s performance on any measurable metric, or in fact affect it in any significant way? I would love to see some sources.
Thus, I am instantly suspicious of it, and I would really like to see some evidence.
I don’t have evidence in the sense of controlled studies I can point you to. I’m working off of general principles, e.g. the general principles behind the tragedy of group selectionism, as well as extrapolating from Quirinus_Quirrell’s experience and mine.
I guess I’m also making two points that I didn’t do a good enough job of distinguishing. The first is that if your narrative identity contains more superhero aspects than supervillain aspects, you may be more likely to flinch away from what I think is the correct answer on certain kinds of problems, e.g. trolley-like problems. One way to test this would be to prime groups of people into thinking about superheroes or supervillains, respectively, and then give them trolley problems. I don’t know if anyone’s done this.
The second point is that thinking like a supervillain can also make it easier to generate certain kinds of unsavory but potentially useful ideas. One way to test this would be to prime LWers into thinking about superheroes or supervillains, respectively, and then have them play the AI in AI Box experiments. I am pretty sure no one’s done this.
I don’t have the resources to run the first test. The second test is not a very good test (it would be hard to control for the other factors relevant to success in the AI Box experiment).
If the way I wrote the OP or the fact that I put it in Main suggested that I had strong evidence that this is a really good idea, that was a mistake on my part.
e.g. the general principles behind the tragedy of group selectionism, as well as extrapolating from Quirinus_Quirrell’s experience and mine.
I don’t believe that the Tragedy of Group Selectionism is entirely analogous, since that scenario deals with epistemic as opposed to instrumental rationality. And while your and Quirinus_Quirrell’s experiences are definitely interesting, that doesn’t mean that they are representative (of course, it doesn’t mean that they aren’t, either).
I think the experiments you propose are promising (the first one more so than the second, as you said). If you could dig up some studies on the topic, or run some yourself, the results would definitely make a good Main post (IMHO).
If the way I wrote the OP or the fact that I put it in Main suggested that I had strong evidence that this is a really good idea, that was a mistake on my part.
No no, I did not get that impression at all—that’s why I originally pointed out that, due to the lack of evidence, this article probably belongs in Discussion more than in Main.