I don’t understand why everybody seems to think it’s desirable for there to keep being jobs or to have humans “empowered”. If AI runs the world better than humans, and also provides humans with material wealth and the ability to pursue whatever hobbies they feel like, that seems like a huge win on every sane metric. Sign me up for the parasitic uberpet class.
I am scared of the idea of very powerful AI taking orders from humans, either individually or through some political process. Maybe more scared of that than of simply being paperclipped. It seems self-evidently hideously dangerous.
Yet an awful lot of people seem obsessed with avoiding the former and ensuring the latter.
I didn’t say this, but my primary motivation for the question actually has more to do with surviving the economic transition process: if and when we get to a UBI-fueled post-scarcity economy, a career becomes just a hobby that also incidentally upgrades your lifestyle somewhat. However, depending on how fast growth rates are during the AGI economic transition, how quickly the government (or sovereign AI) puts UBI in place, and so forth, the transition could be drawn-out, turbulent, and even unpleasant, even if we eventually reach a Good End. While personally navigating that period, understanding which categories of jobs are more or less safe from AGI competition seems like it could be very valuable.
Humans are the most destructive entity on Earth, and my only fear with AI is that it ends up being too human.
The most dangerous currently on Earth, yes. That an AI which picked up unaligned behaviors from bad human examples could be extremely dangerous, yes (I’ve written other posts about that). That that’s the only possibility we need to worry about, I disagree: paperclip maximizers are also quite a plausible concern and are absolutely an x-risk.
True… I don’t know why I used the word ‘only’ there, actually. A bad habit of hyperbole, I guess. There are certainly many unknown-unknown threats that inspire the idea of a ‘singularity’. Every step humanity takes to develop AI now feels like a huge leap of faith.
Personally, I’m optimistic, or at least unworried, but that’s probably partly because I know I’m going to die before things could get to a point where, e.g., humans are in slave camps or some other nightmarish scenario transpires. But I just don’t think a superintelligence would choose a path that humans would clearly resist when it could simply incentivize us to voluntarily do what it wants. Humans are far easier to deal with when they’re duped into doing something they think they want to do, and it shouldn’t be hard for a superintelligence to figure out how to manipulate us that way. Using force or fear to control humans is probably the least efficient option.
I also have little doubt that corporations and state actors are already exploring how to use GPT-type AI for propaganda and other kinds of social and psychological manipulation. I mean, that’s what marketing is, and algorithms designed to manipulate our behavior already drive the internet.
This was intended as agreement with the post it’s replying to.