It’s quite possible governments don’t really notice AGI arriving until it’s already here, especially if the route taken is full of dense technical papers with not much to impress the non-expert.
It’s also possible that governments want to stop development but find they basically can’t. Ban AI research and everyone just changes the title from “AI” to “maths” or “programming” and does the same research.
Note also that this technology, by its very definition (‘transformative’, ‘intelligence’), is so valuable that it immediately gives whoever has it an absurd economic, military, even cultural advantage.
Hard to compete with a country that can sell exports below your marginal cost, and can maybe knock off products with better designs than the products they copied. (The low cost comes from self-replicating robots; the knock-off design is better because an AI agent modeled the product in use and explored a large number of possible designs until it found one with similar functionality that was more reliable or cheaper to make.)
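To make that knock-off story concrete, here’s a minimal toy sketch of the kind of search I mean. Everything in it (the two-parameter ‘design’, the cost and reliability functions) is a made-up stand-in, not a claim about how a real agent would do it:

```python
import random

# Toy stand-in: a "design" is two parameters, and cost/reliability are
# invented functions of them. This only illustrates the loop "simulate
# use, try many designs, keep the cheapest that still works as well".

def cost(thickness: float, bracing: float) -> float:
    return 2.0 * thickness + 1.5 * bracing            # material cost

def reliability(thickness: float, bracing: float) -> float:
    return min(1.0, 0.4 * thickness + 0.5 * bracing)  # survives simulated use

baseline = (1.0, 1.0)            # the product being knocked off
needed = reliability(*baseline)  # must match its functionality

random.seed(0)
best = baseline
for _ in range(100_000):  # explore a large number of candidate designs
    cand = (random.uniform(0.1, 2.0), random.uniform(0.1, 2.0))
    if reliability(*cand) >= needed and cost(*cand) < cost(*best):
        best = cand

print(f"baseline cost {cost(*baseline):.2f} -> knock-off cost {cost(*best):.2f}")
```

Even dumb random search finds a cheaper design that matches the baseline here; a transformative system would presumably do far better than that.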
The military advantage is because weapons are mostly a quantity game, and self-replicating robots can’t really be beaten there.
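A quick back-of-envelope for why quantity is so hard to match; the 30-day doubling time is purely an illustrative assumption:

```python
# Back-of-envelope: self-replicating robots under clean exponential
# doubling. The 30-day doubling time is an illustrative assumption.
DOUBLING_TIME_DAYS = 30

def robot_count(days: int, seed_robots: int = 1) -> int:
    """Robots after `days`, assuming every robot doubles on schedule."""
    return seed_robots * 2 ** (days // DOUBLING_TIME_DAYS)

for years in (1, 2, 3):
    print(f"after {years} year(s): {robot_count(365 * years):,} robots")
# after 1 year(s): 4,096 robots
# after 2 year(s): 16,777,216 robots
# after 3 year(s): 68,719,476,736 robots
```

Against a curve like that, the size of anyone’s existing stockpile barely matters.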
And cultural: as you have seen, a lot of cool tricks are already possible with various generative AIs. Presumably a ‘transformative’ one could do even cooler tricks.
I’m curious how you’d estimate the relative importance of these factors. Myself, I think one of them really outweighs the others.
I’m uncertain. I lean towards the order I’ve written them in as the order of relative importance. However, regulation seems like the biggest uncertainty to me. I don’t feel like I’m good at predicting how people and governments will react to things; it’s possible that technological advancement will occur so rapidly, and be celebrated so widely, that people won’t want it to stop.