A more complex hypothesis is that Meta doesn’t actually love open source that much but has a sensible, self-interested strategy aimed at a dystopian outcome.
I think this particular dystopian outcome is Moloch, not “aimed” malevolence; “aimed at a dystopian outcome” oversimplifies the two-level game: complex internal conflict within the company running parallel to external conflict with other companies. For example, stronger AI and stronger brain-hacking/autoanalysis let them reduce the risk of users dropping below 2 hours of use per day (giving their platform a moat and securing the company’s value), while simultaneously reducing the risk of users spending 4+ hours per day, which invites watchdog scrutiny. More AI means more degrees of freedom to reap the benefits of addiction with fewer of the unsightly bits.
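One way to picture that band of incentives is as a toy utility function; the thresholds and penalty weights below are entirely hypothetical, chosen only to illustrate the shape of the trade-off, not anything Meta has published.

```python
def platform_utility(hours_per_day: float) -> float:
    """Toy model: engagement pays off only inside a 'safe' band.

    All numbers here are hypothetical illustrations.
    """
    RETENTION_FLOOR = 2.0   # below this, churn risk erodes the moat
    SCRUTINY_CEILING = 4.0  # above this, watchdog/regulator attention rises
    utility = hours_per_day  # baseline: more engagement, more ad revenue
    if hours_per_day < RETENTION_FLOOR:
        utility -= 5.0 * (RETENTION_FLOOR - hours_per_day)   # churn penalty
    if hours_per_day > SCRUTINY_CEILING:
        utility -= 8.0 * (hours_per_day - SCRUTINY_CEILING)  # scrutiny penalty
    return utility

# Stronger AI means more control over where users land on this curve,
# i.e. more ability to hold usage near the top of the band.
```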
I’ve previously described a hypothetical scenario in which Facebook and the other 4 large tech companies (of which Twitter/X is not yet a member, due to its vastly weaker data security and dominance by botnets) are testing their own pro-democracy, anti-influence technologies and paradigms, akin to Twitter/X’s open-sourcing of its algorithm, but behind closed doors because of the harsher infosec requirements that the big 5 tech companies face. Perhaps there are ideological splits among their executives, e.g. some trying to solve the influence problem because they’re worried about their children and grandchildren ending up as floor rags in a world ruined by mind-control technology, while others nihilistically march toward increasingly effective influence technologies so that they and their children personally have better odds of ending up on top instead of someone else.
Likewise, even if companies in both the US and China currently seem to eschew brain-hacking paradigms, they might reverse course at any time, especially if brain-hacking truly is the superior move for a company or government under the current multimodal ML paradigm, and especially given the current Cold-War-style state of US–China affairs.
Your and Gwern’s “commoditize the complement” point is now a very helpful gear in my model, both for targeted influence tech and for modelling the US and Chinese tech industries more generally; thank you. Also, I had either forgotten or failed to realize that a thriving community of human creators lets multi-armed bandit algorithms discover more intense influence strategies, rather than leaving the platform bottlenecked on algorithms or on user/sensor data.
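To make the bandit point concrete, here is a minimal epsilon-greedy sketch; every strategy name and payoff number below is hypothetical, invented purely for illustration. The mechanism to notice is that each creator-originated content style is a new arm, so a larger creator community widens the space the bandit can search without any algorithmic improvement.

```python
import random

# Hypothetical epsilon-greedy bandit over content strategies ("arms").
strategies = ["outrage_bait", "parasocial_hook", "fomo_feed"]
counts = {s: 0 for s in strategies}
mean_reward = {s: 0.0 for s in strategies}

def observed_engagement(strategy: str) -> float:
    """Stand-in for logged user engagement; a real platform measures this."""
    base = {"outrage_bait": 0.6, "parasocial_hook": 0.7, "fomo_feed": 0.5}
    return base[strategy] + random.gauss(0.0, 0.1)

EPSILON = 0.1  # fraction of traffic spent exploring unproven arms
for _ in range(10_000):
    if random.random() < EPSILON:
        arm = random.choice(strategies)              # explore
    else:
        arm = max(strategies, key=mean_reward.get)   # exploit best-known arm
    reward = observed_engagement(arm)
    counts[arm] += 1
    mean_reward[arm] += (reward - mean_reward[arm]) / counts[arm]  # running mean

# More creators => more entries in `strategies` => more strategies discovered.
print(max(strategies, key=mean_reward.get))
```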
Strong upvoted.