The glorious abundant technological future is waiting. Let’s muster the best within ourselves—the best of our courage and the best of our rationality—and go build it.
I’m confused about exactly what this post is arguing we should be trying to build. AI? Other technology which would independently result in a singularity? Approaches to building AI safely? Sufficient philosophical and moral progress such that we know what to do with techno utopia?
The effect of working on building technologies (AI and otherwise) which produce the techno utopia is mostly to speed up when techno utopia happens. This could be good under various empirical and moral views, but it seems like a complex question. (E.g., how much do you value currently existing people reaching techno utopia, how much exogenous non-utopia-related risk is there, etc.)
I agree that there is a bunch of other stuff to build (AI safety, sufficient philosophical and moral progress) which we need in order to unlock the full value of techno utopia, but it seems strange to describe this as “building techno utopia”, as all of this stuff is more about avoiding obstacles and utilizing abundance well than about actually building the technology.
While the community sees the potential of, and earnestly hopes for, a glorious abundant technological future, it is mostly focused not on what we can build but on what might go wrong. The overriding concern is literally the risk of extinction for the human race. Frankly, it’s exhausting.
It might be exhausting, but this seems unrelated to whether or not it’s the best thing to focus on under various empirical and moral views?
Perhaps you don’t dispute this, but you do want to reframe what people are working on rather than changing what they work on?
E.g. reframe “I’m working on ensuring that AI is built carefully and safely”[1] to “I’m working on building the glorious techno utopia”. Totally fair if so, but it might be good to be clear about this.
This is generally my overall objection to progress: it seems unclear whether generally pushing technological progress is good, and at minimum I would guess that there are much better things to be pushing (under my empirical views about the likelihood of an AI-related singularity in the next 100 years).
Perhaps you disagree (with at least me) about the fundamental dynamics around singularities and human obsolescence?
[1] Supposing this is the best thing to work on, which may or may not be true for a given person.