> a lot of people have strong low-level assumptions here that a world with lots of strong AIs must go haywire.
For myself, it seems clear that the world has ALREADY gone haywire. Individual humans have lost control of most of our lives—we interact with policies, faceless (or friendly but volition-free) workers following procedure, automated systems, etc. These systems are human-implemented, but in most cases too complex to be called human-controlled. Moloch won.
Big corporations are a form of inhuman intelligence, and their software and operations have eaten the world. AI pushes this well past a tipping point. It’s probably already irreversible without a major civilizational collapse, but it can still get … more so.
> in worlds where AI systems have strong epistemics without critical large gaps, and can generally be controlled / aligned, things will be fine.
I don’t have good working definitions of “controlled/aligned” that would make this true. I don’t see any large-scale institutions or groups large and sane enough to have a reasonable CEV, so I don’t know what an AI could align with or be controlled by.
I feel like you’re talking in highly absolutist terms here.
Global wealth is $454.4 trillion. We currently have ~8 billion humans, with an average happiness of, say, 6/10. Global wealth and most other measures of civilizational flourishing that I know of seem to be generally going up over time.
I think that our world makes a lot of mistakes and fails a lot at coordination. It’s very easy for me to imagine that we could increase global wealth by 3x if we do a decent job.
So how bad are things now? Well, approximately, “We have the current world, at $454 trillion, with 8 billion humans, etc.” To me that’s definitely something to work with.
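For concreteness, here’s the rough arithmetic behind those figures (the happiness score and the 3x multiplier are assumptions of mine, not measurements):

```python
# Back-of-the-envelope sketch of the figures above.
# The 3x multiplier is an illustrative assumption, not a forecast.
GLOBAL_WEALTH_USD = 454.4e12   # ~$454.4 trillion
POPULATION = 8e9               # ~8 billion humans

per_capita = GLOBAL_WEALTH_USD / POPULATION
print(f"Wealth per capita: ${per_capita:,.0f}")  # ≈ $56,800

# The "decent job" scenario: a hypothetical 3x increase in global wealth.
print(f"3x scenario: ${GLOBAL_WEALTH_USD * 3 / 1e12:,.1f} trillion total, "
      f"${per_capita * 3:,.0f} per capita")
```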
> I feel like you’re talking in highly absolutist terms here.
You’re correct, and I apologize for that. There are plenty of potential good outcomes where individual autonomy reverses the trend of the last ~70 years. Or where the systemic takeover plateaus at the current level, and the main change is more wealth and options for individuals. Or where AI does in fact enable many/most individual humans to make meaningful decisions and contributions where they don’t today.
I mostly want to point out that many disempowerment/dystopia failure scenarios don’t require a step-change from AI, just an acceleration of current trends.
> I mostly want to point out that many disempowerment/dystopia failure scenarios don’t require a step-change from AI, just an acceleration of current trends.
Do you think that the world is getting worse each year?
My rough take is that humans, especially rich humans, are generally more and more successful.
I’m sure there are ways for current trends to lead to catastrophe—like some trends dramatically increasing and others decreasing, but that seems like it would require a lengthy and precise argument.
> Do you think that the world is getting worse each year?
Good clarification question! My answer probably isn’t satisfying, though. “It’s complicated” (meaning: multidimensional and not ordinally comparable).
On a lot of metrics, it’s better by far, for most of the distribution. On harder-to-operationally-define dimensions (sense of hope and agency for the 25th through 75th percentile of culturally normal people), it’s quite a bit worse.
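To make the “not ordinally comparable” point concrete, here’s a minimal sketch with made-up numbers (both the dimensions and the scores are purely illustrative):

```python
# Two snapshots of the world, scored on two dimensions.
# All numbers are invented purely to illustrate the structure of the claim.
then = {"material_metrics": 5, "hope_and_agency": 7}
now = {"material_metrics": 9, "hope_and_agency": 4}

improved = [k for k in then if now[k] > then[k]]
declined = [k for k in then if now[k] < then[k]]
print(improved, declined)  # ['material_metrics'] ['hope_and_agency']

# Neither snapshot dominates the other on every dimension, so "better or
# worse?" has no single answer without first choosing how to weight them.
```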
Thanks for the specificity!

> On harder-to-operationally-define dimensions (sense of hope and agency for the 25th through 75th percentile of culturally normal people), it’s quite a bit worse.
I think it’s likely that many people are panicking and losing hope each year. There’s a lot of grim media around.
I’m far less sold that something like “civilizational agency” is declining. From what I can tell, companies have gotten dramatically better at achieving their intended ends in the last 30 years, and most governments have generally been improving in effectiveness.
One challenge I’d have for you / others who feel similar to you, is to try to get more concrete on measures like this, and then to show that they have been declining.
My personal guess is that a bunch of people are incredibly anxious over the state of the world, largely for reasons of media attention, and then this spills over into them assuming major global ramifications without many concrete details or empirical forecasts.
> One challenge I’d have for you / others who feel similar to you, is to try to get more concrete on measures like this, and then to show that they have been declining.
I’ve given some thought to this over the last few decades, and have yet to find ANY satisfying measures, let alone a good set. I reject the trap of “if it’s not objective and quantitative, it’s not important”—that’s one of the underlying attitudes causing the decline.
I definitely acknowledge that my memory of the last quarter of the previous century is fuzzy and selective, and that anything earlier is secondhand and not well supported. But I also don’t deny my own experience that the people I am aware of as individuals (a tiny subset of humanity) have gotten much less hopeful and agentic over time. This may well be for reasons of media attention, but that doesn’t make it not real.