Awesome post, love the perspective. I've generally thought along these lines as well, and it was one of the most convincing arguments for working in AI safety when I switched ("Copilot is already writing 20% of my code, what will happen next?").
I do agree with other comments that Oracle AI takeover is plausible, but I'd say a strong code generation tool has better chances and seems likely to arrive in parallel with conscious chatbots: there's currently more incentive to create code generation tools than chatbots, and chatbots with virtual-assistant-like capabilities seem easier to build as code generation tools (e.g. connecting to Upwork's API to post a job).
And as you rightly mention, converting engineers is much easier with this framing, and it lets us relate better to the field of AI capabilities, though we might want to just add it to our arsenal of arguments rather than replace our framing completely ;) Thank you for posting!