I’m not sure exactly what arguments 2 and 3 are.
I think his three arguments are as follows:
Argument 1: superintelligent AI is unlikely, because human-level AI (a necessary step on the path) is itself unlikely: it isn’t really what people want (they want more tightly focused problem-solvers), and there will be regulatory roadblocks on account of ethical concerns. And if we do make human-level AI, we’ll probably give it goal structures that make it not want to improve itself recursively.
Argument 2: mind-uploading is unlikely, because lots of people will argue against it for quasi-theological reasons, and because uploaded minds won’t fare well without simulated bodies and simulated physics and so forth.
Argument 3: the simulation argument is unfalsifiable, so let’s ignore it. Also, why would anyone bother simulating their ancestors anyway?
… and his tying together of the three goes like this: by argument 1, we are unlikely to get a singularity via conventional (non-uploading) AI because we won’t be doing the kinds of things that would produce one; by argument 2, we won’t want to make use of uploading in ways that would lead to superintelligent AI because we’d prefer to stay in the real world or something indistinguishable from it; by argument 3, the possibility that we’re in a simulation gives us no reason to expect that we’ll witness a singularity.
I think he’s largely right about the simulation argument, but I don’t think anyone thinks the simulation argument is any reason to expect a singularity soon, so I’m not sure why he bothers with it. His other two arguments seem awfully unconvincing to me, for reasons that are probably as obvious to everyone else here as they are to me. (Examples: the existence of regulations doesn’t guarantee that everyone will obey them; something with roughly human-level intelligence, whatever its goals, might well figure out that making itself smarter is a good way to achieve them, so controlling an AI’s goals is no guarantee against recursive self-improvement; and if we’re able to make uploads at all, we’ll probably be able to make many of them, running very fast, which is exactly the sort of phenomenon that might lead to an intelligence explosion even if the early stages don’t involve anything more intelligent than human beings.)
I don’t think anyone thinks the simulation argument is any reason to expect a singularity soon, so I’m not sure why he bothers with it.

Perhaps Stross is treating Singularitarianism as a package of beliefs. Since people who talk about the Singularity also tend to talk about the Simulation Argument, the package of beliefs must contain the belief that we are living in a simulation. Thus any critique of the belief package must address the question of whether we live in a simulation.