Is this an AGI risk?
A company that makes CPUs that run very quickly but don’t do matrix multiplication or other operations important for neural networks.
Context: I know people who work there
Perhaps, but I’d guess only in a rather indirect way. If there’s some manufacturing process that the company invests in improving in order to make their chips, and that manufacturing process happens to be useful for matrix multiplication, then yes, that could contribute.
But it’s worth noting how many things would be considered AGI risks by such a standard: basically the entire supply chain for computers, and anyone who works for or with top labs — the landlords that rent office space to DeepMind, the city workers that keep the lights on and the water running for such orgs (and their suppliers), etc.
I wouldn’t worry your friends too much about it unless they are contributing very directly to something that has a clear path to improving AI.
Thanks
May I ask why you think AGI won’t contain an important computationally-constrained component which is not a neural network?
Is it because right now neural networks seem to be the most useful thing? (This does not feel reassuring, but I’d be happy for help making sense of it)
Metaculus has a question about whether the first AGI will be based on deep learning. The crowd estimate right now is at 85%.
I interpret that to mean that improvements to neural networks (particularly on the hardware side) are most likely to drive progress towards AGI.