There should be more discussion around here about concentration of power, the decoupling of elites from the masses, and social fragmentation.
Disruptive technologies amplify economic inequality, both within countries and between them.
New technologies, from farming to automation, quickly benefit a small number of people, essentially the owners.
As a new technology emerges, the general population sees wage stagnation and job displacement, and receives some benefit from the technology only after a long delay.
Each new disruptive technology tends to directly benefit fewer and fewer people, while a growing share of the population has less and less to offer in exchange.
While it is true that new technologies often raise the baseline for everybody (e.g. more food), the gap between people remains enormous (e.g. food quality differs wildly), and whoever owns the food supply can steer the choices of the many.
At some point, the few will no longer need the general population to reach their goals, and this dynamic can compound, especially under an exponential increase in technology (consider how today's tech companies dwarf most others; last century's manufacturing companies never reached a comparable level of power).
Couple this with the fact that people in the most developed countries are becoming more and more individualistic, and pervasive technology will make it very easy for powerful players to steer humanity.
What are the consequences of this? I do not see many people discussing it on the forum.
While it is true that AI itself might doom humanity, I consider human dynamics a greater threat (if only because investment in AI increases the likelihood that this small group ends up owning the world, which in turn may increase the likelihood of AI doom).
This is a better-defined problem than the usual AI safety talk, and I think it needs to be addressed ASAP through public discourse. In my opinion, we need to focus on the right things.
Addressing it might also implicitly benefit the usual AI safety work (and to be honest, I do not see current AI research operating at the level of research in math or fundamental physics, where first-principles approaches and taking the time to think deeply about problems are preferred, but that is a topic for another day).