One thing I never understood about the internet sphere labelled “rationalists” (LW, OB, SSC, etc.) is its series of seemingly strong beliefs about the future and/or about reality, the main one being around “AI”.
Even more so, I never understood why people believe that thinking about certain problems (e.g. AI alignment) is better than chance at solving them, given that there is no evidence it is (and no possible evidence, since the problems lie in the future).
I’ve come to believe that I (and I’m sure many other people) differ from the mainstream (around these parts, that is) in a belief I can best outline as:
“Reason” may not be the determining factor in achieving agency over the material world; rather, the limiting factor might be resources (including, e.g., the resources needed to facilitate physical labour, or to power a supercomputer). What is interpreted as “reason causing an exponential jump in technology” could and should instead be interpreted as the random luck of experimenting in the right direction, which we rationalize in hindsight by saying the people exploring that direction “were smarter”. More importantly, science and theoretical models are less tightly linked to technological innovation than people assume in the first place (see how most post-19th-century physics and chemistry, including things like general relativity, is not required for most technological applications, including those credited to physics as a science).
I’ve considered writing an article aimed solely at the LW/SSC crowd trying to defend something like the above proposition with historical evidence, but the few times I tried it was rather tedious. I still want to do so at some point, but I’m curious whether anyone has written this sort of article before: essentially something that boils down to “a defence of a mostly-sceptical take on the world which can easily be digested by someone from the rationalist-blogosphere demographic”.
I understand this probably sounds insane to the point of trolling to people here, but please keep an open mind, or at least please grant me that I’m not trolling. The position outlined above is fairly close to what an empiricist or skeptic would hold; if anything it’s a lightweight version, since a skeptic might doubt our ability to gain more knowledge of, or agency over, the outside world in the first place, at least in a non-random way.
[Question] Has anyone on LW written about material bottlenecks being the main factor in making any technological progress?