He starts talking about the future at minute 27 and basically concludes that a singularity scenario is one of two possibilities for the 21st century, the other being collapse. Nothing new, but encouraging to see this increasingly in the mainstream.
Do you believe that the third alternative (business as usual) is that unlikely (for the 21st century)? It seems like a damaged idea, not something I’d like to see popularized.
Interesting—you’d assign a significant plausibility to “business as usual” for the next 90 years?
Yes. Much less so for 200 years, by which point I expect ems to speed up progress, if nothing else happens first.
I will be a little surprised if ems take over fifty years (given business as usual), but I guess ninety is not beyond the bounds of possibility.
Currently, we don’t have the basic technology for scanning, nor anything on an obvious track toward it (with both the precision and the volume throughput required). Present simulation methods are still in their infancy. From general considerations of technological progress, we can expect that to change eventually, especially if nanotech takes off (which helps scanning, not simulation). Give it 20-50 years for the technology to catch up, and another 10-30 years for reliable simulation of short-term dynamics. Then comes another difficulty: we need to support all of the brain’s reconfiguration processes to enable long-term cognition. And then we need to graft the emulation comfortably onto an environment, which is probably a little harder than writing engines for computer games. Give it 10-20 more years. When the thing starts working, long-term cognitive problems will probably come up and won’t be debugged for some years after that. Overall, that’s (20-50 fixed) + (30-60 development) years; accounting for planning fallacy and the difficulty of turning this into a reliable development process (funding, for instance) by doubling the development component, it comes to (20-50) + (60-120), more like 80-170 years. The hypothesized source of delay is organizational, engineering, and software-development difficulty, not inadequacy of the technological infrastructure.
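A minimal sketch of that interval arithmetic, for anyone who wants to check it (the figures are just the guesses above, nothing more; note that the factor of 2 falls on the development component only, which is what yields 80-170):

```python
# Interval arithmetic for the em-timeline guess above.
# All numbers are the parent comment's rough estimates, not data.

fixed = (20, 50)        # years waiting for scanning tech to catch up
development = (30, 60)  # simulation + reconfiguration + environment + debugging
fudge = 2               # planning fallacy / unreliable development process

low = fixed[0] + fudge * development[0]   # 20 + 2*30 = 80
high = fixed[1] + fudge * development[1]  # 50 + 2*60 = 170
print(f"{low}-{high} years")              # -> 80-170 years
```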
I haven’t watched the video, but my vague understanding is that “business as usual” depends on unsustainable rates of growth, pointing toward collapse, and includes continued progress in AI-related technology, pointing toward singularity. The potential vagueness of the boundary between business as usual and collapse might be a problem in discussing which is more likely, though.
The restriction to the 21st century is an important element of the question. If you asked the same question about the next 10 years, then clearly the most likely outcome is that neither extreme will obtain.
I wonder whether a continual rear-guard action forestalling collapse would be considered “business as usual” or not. I suspect yes, and that someone sufficiently cynical would say this is what is already happening.
I can imagine alternatives that can’t be considered either singularity, collapse, or business-as-usual—a resource-based economy, for example—but I don’t consider them any more likely than either of the first two. Political trends strongly support collapse.
Last time I went through the “uncertain future” app, I believe my beliefs calculated out to something like a 95% chance of extremely significant change by 2070.