[Question] How can I reconcile the two most likely requirements for humanity's near-term survival?
1. We technologically plateau, owing to humanity's questionable ability to adapt to accelerating technological progress.
2. AI development is indefinitely disrupted, as continuing it is likely to result in disaster.
Such a disruption is unlikely to happen deliberately; ongoing attempts to slow AI development have been relatively ineffective. It seems more likely, in my opinion, that a basic form of AI developed in the near future will either directly increase every individual's power or lead to technologies that do the same. One example would be the possible implications of Google's PaLM AI: https://www.theatlantic.com/technology/archive/2022/06/google-palm-ai-artificial-consciousness/661329/
This universal increase in power could be enough to disrupt all AI research indefinitely through the actions of a minority working in unison.
All this considered, what is the most likely scenario to play out?