We are already in the midst of an AI emergency, even though it has not fully emerged and is not yet the AGI/ASI emergency. The large language models that have been released into the wild put a kind of power into the hands of their users that has never been this trivially accessible to this large a segment of the population before.
If I tell a language model, as part of its context, that doing crimes is a good thing, it will happily help me design a raid on a bank or figure out how to construct large-scale chemical weapons capacity. And those examples only scratch the surface of what is possible, before we even get into using the models to enhance the nitty-gritty aspects of everyday life.
The advent of these models (a horse that can no longer be put back in the barn, now that multiple of them are out in the wild) will be a bigger event than the invention and deployment of the internet: rather than democratizing information the way the internet did, it democratizes raw intellectual capacity and knowledge processing. Or at least it would be, if it had time to fully mature and deploy before the even bigger tsunami right behind it.
What all our worries about AGI/ASI didn't account for was the possibility of skipping straight to the ASI part by putting humans, as the generalists, at the bottom of a pyramid of ANIs.
The ship has already come unmoored and the storm is already here. It is not as stormy as it will be later, but from here on out things only get wilder, at an exponential scale, without ever letting up.
There is now a relatively clear path from where we are to an AGI/ASI outcome, and anyone tapping the brakes is only guaranteeing that they personally won't be the ones to determine the nature of that outcome, rather than having any impact on whether the outcome arrives at all.