I would also place myself in the upper right quadrant, close to the doomers, but I am not one of them.
The reason is that the exact meaning of “tractable for an SI” is not very clear to me. I do think that nanotechnology/biotechnology can progress enormously with SI, but the problem is not only developing the required knowledge, but also creating the economic conditions that make these technologies possible, building the factories, making new machines, etc. For example, nowadays, in spite of the massive worldwide demand for microchips, there are very, very few factories (and for some specific technologies the number of factories is n=1). Will we get there eventually? Yes. But not at the speed that EY fears.
I think you summarised my position pretty well in this paragraph:
“I think another common view on LW is that many things are probably possible in principle, but would require potentially large amounts of time, data, resources, etc. to accomplish, which might make some tasks intractable, if not impossible, even for a superintelligence.”
So I do think that EY believes in “magic” (even more after reading his tweet), but some people might not like the term and I understand that.
In my case, using the word magic does not refer only to breaking the laws of physics. Magic might also refer to someone who holds such a simplified model of the world that they think you can build, in a matter of days, all those factories, machines and working nanotechnology (on the first try), then successfully deploy them everywhere and kill everyone, that we will get to that point in a matter of days, AND that there won’t be any other SI that could work to prevent those scenarios. I don’t think I am misrepresenting EY’s point of view here; correct me otherwise.
If someone believed that a good group of engineers working for one week on a spacecraft design could successfully land it, 30 years later, on an asteroid close to Proxima Centauri, would you call it magical thinking? I would. There is nothing beyond the realm of physics here! But it assumes so many things and is so stupidly optimistic that I would simply dismiss it as nonsense.
A historical analogy could be the invention of the computer by Charles Babbage, who couldn’t build a working prototype because the technology of his era did not allow the precision necessary for the components.
The superintelligence could build its own factories, but that would require more time and more action in the real world that people might notice; the factory might require unusual components or raw materials in unusual quantities, and some components might even require their own specialized factory, etc.
I wonder, if humanity ever gets to the “can make simulations of our ancestors” phase, whether it will be a popular hobby to do “speedruns” of technological explosion. Like, in the simulation you start as a certain historical character, and your goal is to bring about the Singularity or land on Proxima Centauri as soon as possible. You have access to all the technological knowledge of the future (e.g., if you close your eyes, you can read Wikipedia as of the year 2500), but you need to build everything using the resources available in the simulation.
“The superintelligence could build its own factories, but that would require more time and more action in the real world that people might notice; the factory might require unusual components or raw materials in unusual quantities, and some components might even require their own specialized factory, etc.”
People who consider this a serious difficulty are living on a way more competent planet than mine. Even if RearAdmiralAI needed to build new factories or procure exotic materials to defeat humans in a martial conflict, who do you expect to notice or raise the alarm? No monkeys are losing their status in this story until the very end.
The Babbage example is the perfect one. Thank you, I will use it.