To be clear: I’m not arguing against the article; I’m asking for clarification.
I find myself thoroughly confused by this article.
How is a higher probability of meltdown NOT a “point against” the reactor, and how is less waste NOT a “point for”? I think I’m missing some underlying principle here.
If you tell people a reactor design produces less waste, they rate its probability of meltdown as lower.
Wait. WHAT? How does that even make sense?
I suppose if you gave me a long, boring lecture about reactors and then quizzed me on it before the facts had sunk in (with my house-cat memory), I could get this wrong for the exact reasons you described, without being irrational.
Suppose there’s a multiple-choice question, “How much waste does reactor 1 produce?” If I know that reactor 1 is the best across most categories (has the most points in its favor), and that all reactors produce between 10 and 15 units of waste, then my answer would be (b) below:
(a) 8 units
(b) 10 units
(c) 12 units
(d) 14 units
And of course, there’s every possibility that “reactor 1” didn’t get the best score in waste production. Didn’t I just make the same mistake Eliezer described, for completely logical reasons (a maximum-likelihood guess under uncertainty)? This isn’t a failure of my logic; it’s a failure of my memory.
In real life, if I expected a quiz like this, I would have STUDIED.
Why else would anyone expect an overall-best-ranking reactor to necessarily be the best at waste production?
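Here’s a minimal sketch of that guess as a procedure (Python; the function, the fallback rule, and all the numbers come from my hypothetical quiz, not from the article):

```python
# A toy illustration of guessing under uncertainty with partial memory.
def guess_waste(options, known_range, best_overall):
    lo, hi = known_range
    # I remember all reactors fall in this range, which rules out (a) 8.
    feasible = [x for x in options if lo <= x <= hi]
    if best_overall:
        # Reactor 1 wins most categories, so the most likely value
        # is the best (lowest) waste figure still in the range.
        return min(feasible)
    # Otherwise, fall back to a middling guess.
    return sorted(feasible)[len(feasible) // 2]

print(guess_waste([8, 10, 12, 14], (10, 15), best_overall=True))  # -> 10, answer (b)
```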
Here’s another idea. Suppose that long, boring hypothetical lecture was, on top of everything else, so confusing that the listener carries away the message that “a meltdown is when a reactor has produced more waste than its capacity.” Then it is a perfectly logical chain of reasoning that if a reactor produces less waste, its probability of meltdown is lower. But this is poor communication, not poor reasoning.
I believe the way it worked out was that when people heard a particular design produced less toxic waste, they also assumed that a reactor producing less waste was less likely to melt down.
That’s +1 for less waste and +1 for less chance of meltdown.
When they are then told that this same design has a higher chance of meltdown, they subtract one point for the meltdown but never retract the meltdown point they had inferred from the lower waste, even though that inference has just been contradicted.
So, the audience tallies like so:
+1 (less waste) +1 (inferred for less meltdown) −1 (more meltdown) = +1
When they should have tallied like so:
+1 (less waste) −1 (more meltdown) = 0
The net ends up being +1 for the reactor, instead of 0.
This leaves them with a good feeling about the reactor, when in reality they should have felt neither positive nor negative.
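To make the arithmetic concrete, here’s a toy version of the two tallies in Python (the labels and point values are just my illustration):

```python
# Points actually stated in the lecture, plus the point the audience inferred.
stated   = {"less waste": +1, "more meltdown risk": -1}
inferred = {"less meltdown risk (assumed from less waste)": +1}

# Faulty tally: the inferred point is never retracted once it is contradicted.
faulty_total = sum(stated.values()) + sum(inferred.values())
print(faulty_total)   # +1 -> an unearned good feeling about the reactor

# Correct tally: score only what was actually stated.
correct_total = sum(stated.values())
print(correct_total)  # 0 -> no reason to feel positive or negative
```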
You’re right, of course. I’d written the above before I read this defense of researchers, before I knew to watch myself when I’m defending research subjects. Maybe I was in too much shock to actually believe that people would honestly think that.
Yeah, it’s a roundabout inference that I think happens a lot. I notice it in myself sometimes: I hear X, assume X implies Y, and later find out Y is not true. It’s difficult to avoid, since it’s so natural, but I think the key is that when you get surprised like that (and even when you don’t), you should re-evaluate the whole thing instead of just adjusting your overall opinion slightly to account for the new evidence. Your accounting could be faulty if you don’t go back and audit it.
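A tiny sketch of that difference, with hypothetical scores in Python: nudging a running total keeps the stale inference baked in, while rebuilding from the current facts catches it.

```python
# Incremental adjustment: keep a running total and nudge it on new evidence.
score = 0
score += 1    # heard X ("less waste")
score += 1    # assumed X implies Y ("less meltdown risk")
score -= 1    # surprise: Y is false, so subtract for the new evidence...
print(score)  # 1 -- the stale inferred point is still in the total

# Full re-audit: rebuild the total from facts you can still defend.
facts = {"less waste": +1, "more meltdown risk": -1}  # inferred items dropped
print(sum(facts.values()))  # 0
```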
I think we should also separate two subjects: the psychology of when this might happen, and whether or not we are using scales.
It may indeed be the case that people are bad accountants (although I rarely find myself assuming these implied things, and if I find that an assumption was wrong I adjust accordingly), but this doesn’t change the fact that we are adding +/− points, much like keeping score when weighing the two alternatives.
Assuming a perfectly rational mind were choosing between reactor A and reactor B (and we can even add reactor C...), the way it would decide which option is best is by tallying the pros and cons of each. Of course, in reality we are not perfectly rational, and moreover different people assign different point values to different categories. But it is still a scale.
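As a sketch of that scale (Python; the categories, scores, and weights are invented for illustration), different weights can flip the ranking, but the decision is still a weighted tally:

```python
def tally(scores, weights):
    """Weighted sum of one reactor's per-category scores."""
    return sum(weights[cat] * pts for cat, pts in scores.items())

reactor_a = {"waste": +1, "meltdown risk": -1}  # less waste, more meltdown risk
reactor_b = {"waste": -1, "meltdown risk": +1}  # the reverse trade-off

safety_first = {"waste": 1.0, "meltdown risk": 3.0}  # one person's weights
waste_averse = {"waste": 3.0, "meltdown risk": 1.0}  # another person's

print(tally(reactor_a, safety_first), tally(reactor_b, safety_first))  # -2.0 2.0
print(tally(reactor_a, waste_averse), tally(reactor_b, waste_averse))  # 2.0 -2.0
```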