I’m not sure what you meant about studying transistors.
It seems to me that if we are studying transistors so hard, it’s to push computers’ capabilities (faster, smaller, more energy-efficient, etc.), not at all to make software safer. To make software safer, we instead rely on antivirus software, automated testing, developer liability, standards, regulations, pop-up warnings, etc.
I agree, that’s an important point. I probably worry more about your first possibility, as we are already seeing this effect today, and less about the second, which would require a level of resignation that I’ve rarely seen. Responsible entities would likely try to do something about it, but here are some ways this “we’re doomed, let’s profit” attitude might take hold:
- The warning shot comes from a small player, and a bigger player feels urgency or feels threatened, in a situation over which it has little control.
- There is no clear locus of responsibility: many entities are at the frontier, each thinking the others are responsible and that there is no way to stop them.
Another case where a warning shot is harmful is if the lesson learnt from it is “we need stronger AI systems to prevent this”. This probably goes hand in hand with poor credit assignment.