An independent researcher/blogger/philosopher writing about intelligence and agency (esp. Active Inference), alignment, ethics, the interaction of the AI transition with sociotechnical risks (epistemics, economics, human psychology), collective mind architecture, and research strategy and methodology.
Twitter: https://twitter.com/leventov. E-mail: leventov.ru@gmail.com (the preferred mode of communication). I’m open to collaborations and work.
Presentations at meetups, workshops and conferences, some recorded videos.
I’m a founding member of the Gaia Consortium, on a mission to create a global, decentralised system for collective sense-making and decision-making, i.e., civilisational intelligence. Drop me a line if you want to learn more about it and/or join the consortium.
You can help boost my sense of accountability and show me that my work is valued by becoming a paid subscriber of my Substack (though I don’t post anything paywalled; on this blog, I just syndicate my LessWrong writing).
For Russian speakers: a Russian-language AI safety network, Telegram group.
I agree this is unfortunate, but it also seems irrelevant? Academic economics (as well as sociology, political science, anthropology, etc.) has approximately no bearing on how major governments shape their AI policies. Likewise, “societal preparedness” and “governance” teams at major AI labs and BigTech giants seem to have approximately no influence on the concrete decisions and strategies of their employers.
Perhaps the last economist who significantly influenced the economic and policy trajectory was Milton Friedman?
If not research, what can deliberately affect the economic and policy trajectory at all (setting aside unsteerable memetic and cultural drift), apart from powerful leaders themselves (Xi, Trump, Putin, Musk, etc.)? Perhaps the way we explore the “technology tree” (see https://michaelnotebook.com/optimism/index.html): the internet, social media, blockchain, the form factors of AI models, etc. I don’t hold much hope here, but this looks to me like the only plausible lever.