I believe the way it worked out was that when they heard a particular design produced less toxic waste, they also assumed a reactor that produced less waste was less likely to melt down.
That’s +1 for less waste and +1 for less chance of meltdown.
When they are then told that this same design has a higher chance of meltdown, they subtract a point for the meltdown, but they never retract the point they inferred for less meltdown, even though the new information directly contradicts that inference.
So, the audience tallies like so:
+1 (less waste) +1 (inferred for less meltdown) −1 (more meltdown) = +1

When they should have tallied like so:
+1 (less waste) −1 (more meltdown) = 0
The net ends up being +1 for the reactor, instead of 0.
This results in a good feeling for the reactor, when in reality they shouldn’t have felt positive or negative.
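To make the accounting concrete, here is a tiny sketch of the two tallies (purely illustrative; the point values are just the ±1s from above):

```python
# Purely illustrative: the +/-1 point values mirror the tallies above.

# What the audience does: the inferred "less meltdown" point is never retracted.
audience_tally = (
    +1   # less waste (stated)
    + 1  # less meltdown (inferred from "less waste")
    - 1  # more meltdown (stated afterwards)
)

# What careful accounting does: score only the stated facts, so the inferred
# point disappears once it is contradicted.
careful_tally = (
    +1   # less waste (stated)
    - 1  # more meltdown (stated)
)

print(audience_tally)  # 1 -> net positive feeling about the reactor
print(careful_tally)   # 0 -> no reason to feel positive or negative
```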
You’re right, of course.
I’d written the above before I read this defense of researchers, before I knew to watch myself when I’m defending research subjects. Maybe I was simply too shocked to believe that people would honestly think that.
Yeah, it’s a roundabout inference that I think happens a lot. I notice it in myself sometimes: I hear X, assume X implies Y, and then later find out Y is not true. It’s difficult to avoid, since it’s so natural, but I think the key is that when you get surprised like that (and even if you don’t), you should re-evaluate the whole thing instead of just adjusting your overall opinion slightly to account for the new evidence. Your accounting could be faulty if you don’t go back and audit it.
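A minimal sketch of what I mean by auditing, with made-up fact names and scores: re-derive any inferred beliefs from the stated facts each time, instead of nudging a running total that still carries the stale inference.

```python
# Hypothetical fact names and scores; the structure is what matters.
SCORES = {"less waste": +1, "less meltdown": +1, "more meltdown": -1}

def audited_tally(stated):
    """Re-derive inferences from the stated facts, then score everything."""
    beliefs = set(stated)
    # The roundabout inference, kept only while nothing contradicts it.
    if "less waste" in beliefs and "more meltdown" not in beliefs:
        beliefs.add("less meltdown")
    return sum(SCORES[b] for b in beliefs)

print(audited_tally({"less waste"}))                   # 2 (inference included)
print(audited_tally({"less waste", "more meltdown"}))  # 0 (inference dropped on re-audit)
```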
I think we should also separate two questions: the psychology of when this happens, and whether or not we are using scales at all.
It may indeed be the case that people are bad accountants (although I rarely find myself assuming these implied things, and if I do find that an assumption was wrong, I adjust accordingly), but this doesn’t change the fact that we are adding and subtracting points, much like keeping score or weighing the two alternatives.
If a perfectly rational mind were weighing reactor A against reactor B (and we could even add reactor C), it would decide which option is best by tallying the pros and cons of each. Of course, in reality we are not perfectly rational, and different people assign different point values to different categories. But it is still a scale.
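As a rough sketch of that (reactor attributes and point weights are invented for illustration): the weights differ from person to person, but each evaluation is still a tally on a scale.

```python
# Invented attributes and weights, just to show the shape of the tally.
reactors = {
    "A": {"less_waste": True,  "low_meltdown_risk": False},
    "B": {"less_waste": False, "low_meltdown_risk": True},
    "C": {"less_waste": True,  "low_meltdown_risk": True},
}

def score(attrs, weights):
    """Weighted pro/con tally: add the weight for a pro, subtract it for a con."""
    total = 0
    total += weights["waste"] if attrs["less_waste"] else -weights["waste"]
    total += weights["meltdown"] if attrs["low_meltdown_risk"] else -weights["meltdown"]
    return total

safety_focused = {"waste": 1, "meltdown": 5}   # meltdown dominates this person's scale
waste_focused  = {"waste": 5, "meltdown": 1}   # waste dominates this person's scale

for name, attrs in reactors.items():
    print(name, score(attrs, safety_focused), score(attrs, waste_focused))
# A and B swap places depending on the weights, but both evaluators
# are still keeping score on a scale.
```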