What do you think: could AI-powered mind hacks be so powerful that they would themselves be an x-risk? For example, AI-generated messages that dissolve a person's value system and core beliefs, or that even install an AI on wetware?
Effective wireheading via AI-powered games and the like is also a form of mind hack.
Theoretically, it could be an x-risk, but it's not neglected, in the same way that climate-change tipping points aren't neglected. Maybe it could even reduce the required length of a pause by increasing the pace at which our civilization approaches dath ilan, or something like that. I'm mainly thinking about it as a roadmap for the 2020s, for any operation happening right now (e.g. AI alignment) that will probably take more than 10 years.
In the middle of the doc, I wrote:
Facebook and the other 4 large tech companies (of which Twitter/X is not yet a member, due to vastly weaker data security) might be testing out their own pro-democracy anti-influence technologies and paradigms, akin to Twitter/X's open-sourcing of its algorithm, but behind closed doors due to the harsher infosec requirements that the big 5 tech companies face. Perhaps there are ideological splits among executives, e.g. some trying to find a solution to the influence problem because they're worried about their children and grandchildren ending up as floor rags in a world ruined by mind-control technology, while others nihilistically march toward ever more effective influence technologies so that they and their children personally have better odds of ending up on top instead of someone else.