This suggests that a common tactic (deliberate or otherwise) would be to represent your opponents as being the level below you, rather than the level above. For example, this article, which treats Singularitarians as at level 1, rather than level 3, on:
technology is great! → but it has costs, like harm to the environment and making social control easier → Actually, the benefits vastly outweigh those.
Ironically, it’s not that far off for SIAI, which is at level 4: ‘certain technologies are existentially dangerous’.
This seems to hold true for all the triads you mention, except possibly the medicine one: level 2 people falsely represent level 3 people as level 1.
‘Existentially dangerous’ doesn’t mean the benefits don’t still outweigh the costs. If there’s a 95% chance that uFAI kills us all, that’s still a whopping 5% chance at unfathomably large amounts of utility. Technology still ends up having been a good idea after all.
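Spelled out as a rough expected-value sketch (U_doom and U_win are just illustrative labels for the extinction outcome and the astronomical payoff):

\[
\mathbb{E}[U] = 0.95 \cdot U_{\text{doom}} + 0.05 \cdot U_{\text{win}}
\]

which is positive whenever \(U_{\text{win}} > 19\,|U_{\text{doom}}|\), so an ‘unfathomably large’ payoff swamps even a 95% chance of losing everything.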
Each level adds necessary nuance. Unfortunately, at each level is a new chance for unnecessary nuance. Strong epistemic rationality is the only thing that can shoulder the weight of the burdensome details.
Added: Your epistemic rationality is limited by your epistemology. There’s a whole bunch of pretty and convincing mathematics that says Bayesian epistemology is the Way. We trust in Bayes because we trust in that math: the math shoulders the weight. A question, then. When is Bayesianism not the ideal epistemology? For humans, the answer is ‘limited resources’. But what if you had unlimited resources? At the limit, where doesn’t Bayes hold?
I’d noticed this quite often, long before seeing this article. There seems to be a strong tendency for people to present themselves as breaking old, established stereotypes even when the person they’re arguing against says exactly the same thing, and in some cases where the stereotype has only been around for a very short time (I recall one article arguing against the idea of Afghanistan being “the graveyard of empires”, which to my understanding had surfaced only about six months before that article, with the publication of a specific book).
However, this does add an interesting dimension to it, with the fact that Type 2 positions actually were founded on a rejection of old, untrue beliefs of Type 1s, and Type 3s often resemble Type 1s. In fact I’d say that in every listed political example, the Type 2s who know about Type 3s will usually lump them in with Type 1s.
This is, IMO, good in a way because it keeps us from a massive proliferation of levels and the resulting complications; instead we just get added nuance in the Type 2 and 3 positions.