mm.. I gave the wrong impression there; my actual boss doesn’t have a huge opinion on AI; in fact he’ll take some convincing.
I should state my assumptions:
software engineering will be completely automated in the next 3 years
in the beginning and maybe for a while, it will require advanced models and workflows
the workflows will be different enough between companies that it’s worthwhile to employ some well-paid engineers at each company to maintain them.
these engineers will have a much easier time finding a well-paying job than ‘regular’ software engineers
while this is going on, consulting and SaaS companies will be (successfully) booting up efforts to replace software engineers with paid products.
So at some point, my employer (whoever they are at the time) will have to choose between retaining me, and paying an AI-pipeline-maintenance vendor.
Or maybe whoever I work for at the time gets outcompeted by companies that use advanced AI workflows to generate software; then I get laid off, and I also don’t have the kind of experience needed to work for a competitor.
If you don’t think my assumptions hold, then you should think your career is safe. If they do hold, there’s still the possibility of noticing later and reacting by retooling to remain employable. But if you don’t notice in time, there’s nothing your boss (or the CTO, for that matter) can do to help you. Which is why I need to build this knowledge into my career by applying it: get it on the resume, prove the value IRL.
Nice. So something like grabbing a copy of the SWE-bench dataset, writing a pipeline that would solve those issues, then putting that on your CV?
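To make that concrete, here’s a minimal sketch of what such a pipeline’s skeleton might look like. Everything here is hypothetical scaffolding: `Issue`, `call_model`, and the prompt wording are stand-ins, and `call_model` is stubbed where a real hosted-LLM call would go.

```python
# Hypothetical skeleton of a SWE-bench-style pipeline: take issues,
# ask a model for a patch, collect the results.
from dataclasses import dataclass

@dataclass
class Issue:
    repo: str
    problem_statement: str
    gold_patch: str  # reference fix, used only for offline scoring

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. an HTTP request to a hosted model)."""
    return "diff --git a/... (model-proposed patch)"

def solve(issue: Issue) -> str:
    prompt = (
        f"Repository: {issue.repo}\n"
        f"Issue:\n{issue.problem_statement}\n"
        "Produce a unified diff that fixes this issue."
    )
    return call_model(prompt)

def run_pipeline(issues: list[Issue]) -> dict[str, str]:
    # Map each issue's repo to the patch the model proposed for it.
    return {i.repo: solve(i) for i in issues}
```

In practice you’d load the real dataset (it’s published on the Hugging Face hub as `princeton-nlp/SWE-bench`) and score each proposed patch by actually running the repo’s test suite, not by string comparison.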
I will say though that your value as an employee is not ‘producing software’ so much as solving business problems. How much conviction do you have that producing software marginally faster using AI will improve your value to your firm?
I think an important part is building your own (company’s) collection of examples to train against, since the foundation models are trained on SWE-bench already. And if it works, the advantage would show up on my CV in the worst case and in equity appreciation in the best case. So, just like any skill, right?
You’re right that the whole thing only works if the business can generate returns to high quality code, and can write specifications faster than its complement of engineers can implement them. But I’ve been in that position several times, it does happen. Mainly when the core functionality of the product is designed and led by domain experts who are not software engineers. Like if you make software for accountants for instance.
The reasons you give btw don’t give me much consolation. The code leaking thing is very temporary; if you could host cutting-edge models on AWS or Azure it wouldn’t be an issue for most companies. If you could self-host them it wouldn’t be an issue for almost /any/ company. The errors thing is a crux. The basic solution to that, I think, is scaling: multishot the problem, rank the solutions, test in every way imaginable, and then for each solved problem optimize your prompts till they can one-shot it, keeping a backlog of examples to perform workflow regression testing against.
The style thing is very tractable, AIs love following style instructions.
The big moment for me was realizing that while each AI’s context window is limited, within that window you can ask LOTS of different questions and expect a pretty good answer. So you ask questions that compress the information in the window for the purpose of your problem (LLMs are pretty darn good at summarizing), and keep doing that until you have enough context to solve the problem.
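That compress-then-accumulate idea could be sketched like this. `ask_llm` is a placeholder for a real model call, and the window size is an arbitrary number chosen for illustration:

```python
# Sketch: repeatedly summarize chunks with respect to the problem,
# then answer from the compressed notes once they fit in one window.

def ask_llm(prompt: str) -> str:
    """Placeholder: in reality this hits a hosted LLM."""
    return f"[summary of {len(prompt)} chars]"

def compress(chunks: list[str], problem: str) -> list[str]:
    # Summarize each chunk *for the purpose of* the problem at hand.
    return [
        ask_llm(f"Summarize the following for the purpose of: {problem}\n\n{c}")
        for c in chunks
    ]

def answer(corpus: list[str], problem: str, window: int = 4000) -> str:
    notes = compress(corpus, problem)
    # Keep compressing until all the notes fit in one context window.
    while sum(len(n) for n in notes) > window:
        notes = compress(notes, problem)
    return ask_llm(f"Given these notes:\n{''.join(notes)}\nSolve: {problem}")
```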