Avoiding all such knowledge is a perfect precommitment strategy.
Not plausible: it would necessarily entail avoiding “good” knowledge as well. More generally, a decision theory that can be hurt by knowledge is one you will want to abandon in favor of a better decision theory, which makes it reflectively inconsistent. The example you gave would cut you off from significant good knowledge.
By the way, are there lower and upper bounds on the number of paperclips in the universe?
Mass of the universe divided by minimum mass of a true paperclip, minus net unreusable overhead.
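For concreteness, a minimal back-of-envelope sketch of that bound, assuming roughly 1.5e53 kg of ordinary matter in the observable universe and a ~0.5 g minimal paperclip; these figures, and the zero overhead term, are illustrative assumptions rather than numbers from the thread:

    # Rough sketch of the stated upper bound: universe mass / minimum paperclip mass,
    # minus net unreusable overhead. All figures are illustrative assumptions.
    baryonic_mass_kg = 1.5e53       # approximate ordinary (baryonic) matter in the observable universe
    min_paperclip_mass_kg = 5e-4    # assume ~0.5 g for the lightest "true" paperclip
    unreusable_overhead_clips = 0   # unknown here; set to zero for this estimate

    upper_bound = baryonic_mass_kg / min_paperclip_mass_kg - unreusable_overhead_clips
    print(f"upper bound: {upper_bound:.1e} paperclips")  # ~3.0e+56 with these assumptions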
Humans are just amazing at refusing to acknowledge the existence of evidence. Try throwing some evidence for faith healing or homeopathy at an average lesswronger, and watch them refuse to acknowledge that it even exists before looking at the data (or look at how they recently reacted to peer-reviewed, statistically significant results showing precognition: the work passed all the usual scientific standards, and yet everyone still rejected it without really looking at the data). Every human seems to have some basic patterns of information they automatically ignore. Not believing offers from blackmailers, and automatically assuming they’d carry out their threats anyway, is one such common filter.
It’s true that humans cut themselves off from a significant good this way, but the upside is worth it.
minimum mass of a true paperclip
Any idea what it would be? It makes little sense to manufacture a few big paperclips if you can just as easily manufacture a lot more tiny ones that are just as good.
Humans are just amazing at refusing to acknowledge the existence of evidence.
And those humans would be the reflectively inconsistent ones.
It’s true that humans cut themselves off from a significant good this way, but the upside is worth it.
Not as judged from the standpoint of reflective equilibrium.
Any idea what it would be? It makes little sense to manufacture a few big paperclips if you can just as easily manufacture a lot more tiny ones that are just as good.
I already make small paperclips in preference to larger ones (up to the limit of clippiambiguity).
And those humans would be the reflectively inconsistent ones.
Wait, you didn’t know that humans are inherently inconsistent and use aggressive compartmentalization mechanisms to think effectively in the presence of inconsistency, ambiguous data, and limited computational resources? No wonder you get into so many misunderstandings with humans.
Mass of the universe divided by minimum mass of a true paperclip, minus net unreusable overhead.
Up to the level of precision we can handle, yes.