Thanks for writing this.
I had been meaning to express a similar view, but I wouldn’t have put it nearly as well.
In the past two months I’ve gone from over-the-moon excitement about AI to deep concern.
This is largely because I misunderstood the sentiment around superintelligent AGI.
I thought we were on the same page about using narrow LLMs to help us solve problems that plague society (e.g., protein folding). But what I see cluttering my timeline and clogging the podcast airwaves is utter delight at how much closer we are to having an AGI with some 6-10x human intelligence.
Wait, what? What did I miss?
I thought that kind of rhetoric was isolated to, at worst, lowest-common-denominator users ungrounded in reality and, at best, the radical Kurzweil types. I mean, listen to us: do we really need to argue about the percentage risk that human life gets exterminated by AGI?
Let me step off my soapbox and address a concern that was illuminated in this piece, one that the biggest AGI proponents should at least ponder.
The concern has to do with the risk of hurting innocent bystanders who won’t get to make the choice about integrating AGI into the equation. Make no mistake: AGI, aligned or not, will likely cause immense disruption for billions of people. At the low end, displaced jobs; at the high end, being killed by an unaligned AGI.
We all know about the consequences of the Industrial Revolution and job displacement, but we look back at historical technological advances with appreciation that they led us to where we are. Are you so sure that AGI is just the next step in that long ascension? To me, it doesn’t look like it is.
In fact, AGI isn’t what people want at all. What we’re learning about happiness is that work is incredibly important.
You know who isn’t happy? The retired and the elderly who find themselves with no role in society and an ever-narrowing circle of friends and acquaintances.
“They’ll be better off with AGI doing everything. Trust me, technological progress always improves our lives.”
Are you sure about that?
There are so many philosophical directions I could take to rebut this (happiness comes from less choice, not more), but I’ll get to the point, which is:
You don’t get to decide. Not this time anyway.
It might be worth mentioning that the crypto decentralization movement is the exact opposite of AGI. If you’re a decentralization enthusiast who wants to take power away from a centralized few, then you should be ashamed to support the AGI premise of a handful of people modifying billions of lives without their consent.
I will end with this. Your hand has been played.
The AGI enthusiasts have revealed their intentions, and it won’t sit well with basically…everyone.
Unless AGI can be attained in the next 1-2 years, it’s likely to see one of the biggest pushbacks our world has ever witnessed. Information spreads fast, and you’re already seeing the mainstream pick up on the absurdity of pursuing AGI. When this technology starts disrupting people’s lives, get ready for more than just regulation.
Let’s take a deep breath. Remember, AI is meant to solve problems and life’s tragedies, not create them.
“You know who isn’t happy? The retired and the elderly who find themselves with no role in society and an ever-narrowing circle of friends and acquaintances.”
I actually have a theory about this, which I’ll probably write my next post on. I think people mix up different things under the concept of “work,” and that’s how we get these contradictory impulses. I also think this is relevant to concepts of alignment.