Even in computer science, everyone promotes the idea that you have to learn to code to become a code worker, even as automation tools advance at a rapid pace.
I still think it’s quite safe to assume that you will have to learn at least how to read code and write pseudo-code to become a code worker. I previously argued here that the average person is really, really terrible at programming, and an automation tool isn’t going to help someone who doesn’t even know what an algorithm is. Even if you have a fantastic tool that produces 99% correct code from scratch, that 1% of wrong code is still enough to cause terrible failures, and you have to know what you are doing in order to detect which 1% to fix.
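To make the "99% correct" point concrete, here is a hypothetical sketch (the function names and the bug are my own invention, not from any real AI output): a snippet that looks entirely plausible at a glance, runs without crashing, and yet silently returns the wrong answer because of a one-character off-by-one error. Only someone who can actually read the loop spots it.

```python
def sum_first_n_buggy(values, n):
    """Intended: sum of the first n elements of values."""
    total = 0
    for i in range(1, n):  # subtle off-by-one: starts at 1, so values[0] is skipped
        total += values[i]
    return total

def sum_first_n(values, n):
    """Corrected version using a slice."""
    return sum(values[:n])

data = [10, 20, 30, 40]
print(sum_first_n_buggy(data, 3))  # 50 -- wrong, silently drops the 10
print(sum_first_n(data, 3))        # 60 -- correct
```

No exception is raised, no test fails unless someone thought to write one: that is exactly the kind of "1%" that a reader who can't follow the code will never detect.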
I just read your linked post. In the comments, someone proposes the idea that computing will migrate to the next level of abstraction. This is the idea I was quoting in my post: there will be fewer hackers who are very good at tech, and more idea creators who will run AIs without worrying about what’s going on under the hood. I agree with your point that a 1% error can be fatal in any program, and that code written by an AI should be checked before it is deployed across multiple machines.
Speaking of which, I’m amazed that ChatGPT can explain most code snippets in plain language. However, my programming knowledge is basic, and I don’t know whether any programming experts have managed to stump ChatGPT with a very technical, very abstract code snippet.
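For the curious, here is a hypothetical example (my own choice, not taken from any actual experiment) of the kind of terse, abstract snippet one might feed to a chatbot: a fixed-point combinator in Python, where the intent is nearly invisible from the surface syntax.

```python
# A Z-combinator: computes fixed points of functions, enabling
# anonymous recursion without any named recursive call.
fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without referring to itself by name.
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

Whether an AI explains this correctly says a lot more about its grasp of the underlying concept (here, fixed points and recursion) than explaining a typical loop does.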