I buy the automation argument, so I don’t expect automation to be the (main) bottleneck under short timelines. My worries are about needing to change the behavior of humans (e.g. getting programmers to start integrating formal verification) and to upgrade large parts of existing infrastructure, especially where it’s physical.
I agree that the human responses to all of this are the great unknown. The good news is that human programmers don’t need to learn to do formal verification themselves; they just need to realize that it’s important and get the AIs to do it. My suspicion is that these big shifts won’t happen until we start having AI-caused disasters. It’s instructive to look at the human response to the CrowdStrike disaster and other recent security incidents to get some sense of how the responses might go. It’s also interesting to see the unbelievable speed with which Microsoft, Meta, Google, etc. have been building huge new GPU datacenters in response to projected advances in AI.