M.M.Maas—AI Governance at CSER
Sara Hooker’s concept of a ‘Hardware Lottery’ in early AI research might suit some of your criteria, though it was not really a permanent lock-in—http://arxiv.org/abs/2009.06489
I enjoyed this investigation a lot; it’s fascinating to think of the uses to which this could have been put.
You may be interested in a related (ongoing) project I’ve been working on, to survey ‘paths untaken’—cases of historical technological delay, restraint, or post-development abandonment—and to try to assess their rationales or contributing factors. So far, it includes about 160 candidate cases. Many of these need much further analysis and investigation, but you can find the preliminary longlist of cases at https://airtable.com/shrVHVYqGnmAyEGsz/tbl7LczhShIesRi0j , and an initial writeup and pitch of the project at: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological -- I’d be interested to hear any thoughts or comments.
Thanks for sharing this! This fits quite well with an ongoing research project of mine on the history of technological restraint (with lessons for advanced AI). See the primer at https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological & the in-progress list of cases at https://airtable.com/shrVHVYqGnmAyEGsz/tbl7LczhShIesRi0j -- I’ll be curious to return to these cases soon.
In case of interest, I’ve been conducting AI strategy research with CSER’s AI-FAR group, including a project to survey historical cases of (unilaterally decided, coordinated, or externally imposed) technological restraint/delay, and their lessons for AGI strategy (in terms of differential technological development, or ‘containment’).
(see the longlist of candidate case studies, including a [subjective] assessment of the strength of restraint in each case, and of its transferability to the AGI case)
https://airtable.com/shrVHVYqGnmAyEGsz
This is still in-progress work, but will be developed into a paper / post within the next month or so.
---
One avenue I’ve recently become interested in, though I’ve only just begun reading about it and still have large uncertainties, is the phenomenon of ‘hardware lotteries’ in the historical development of machine learning—see https://arxiv.org/abs/2009.06489 -- which describes cases where the development of particular types of domain-specialized compute hardware makes it more costly [especially for e.g. academic researchers, probably less so for private labs] to pursue particular new research directions.
Thanks for this in-depth review, I enjoyed it a lot!
As a sub-distinction between agrarian societies, you might also be interested in this review by Sarah Constantin—https://srconstantin.wordpress.com/2017/09/13/hoe-cultures-a-type-of-non-patriarchal-society/ -- discussing how pre-modern cultures that farmed by plow (=more productive per unit of land, but requiring intense upper-body strength) ended up with very distinct [and more unequal] gender roles compared to cultures that farmed by hoe (=more productive per hour of labour, but requiring vast amounts of land for a small population) -- differences that persist to this day.
On the question of projecting this argument forward into the future—you might be interested in some of the work (papers and blogs) of the philosopher John Danaher, who explicitly draws on Morris’ model in developing a theory of ‘axiological futurism’ (the study of the future of values), along with ideas linking technology and moral revolutions.
Review of Morris: https://philosophicaldisquisitions.blogspot.com/2016/03/the-evolution-of-social-values-from.html
AI’s future impact on societal values: https://philosophicaldisquisitions.blogspot.com/2018/09/artificial-intelligence-and.html
Axiological Futurism: https://philosophicaldisquisitions.blogspot.com/2021/06/axiological-futurism-systematic-study.html
paper: https://www.sciencedirect.com/science/article/pii/S0016328721000884
Nice, thanks for collating these!
Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological
and somewhat older:
lc. ‘What an Actually Pessimistic Containment Strategy Looks Like’. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like.
Hoel, Erik. ‘We Need a Butlerian Jihad against AI’. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against.