Coding
In a video, web developer Joshua Morony explains why he won’t rely on GPT-4 for his code, despite its efficiency. To paraphrase: in coding, knowing why and how a system was designed, which edge cases were accounted for, and so on is valuable because it contextualizes future decision-making. But if you let AI design your code, you lose that knowledge. And if you plan on prompting AI to develop the code further, you’ll lack the judgment to direct it coherently.
Writing
Many have written about how the writing process itself generates the great ideas that comprise a final product. I’ve briefly consolidated those ideas before. This would warn against writing essays with ChatGPT. But it would also apply to, e.g., hiring an editor to compile a book from your own notes rather than writing it yourself. Outsourcing the labour of selecting words, ordering ideas, and so on also offloads the opportunity to generate new ideas and fascinating relationships between them—often much greater ideas than the ones you started with. Of course, your editor may notice all this, but the principal-agent problem may prevent them from working beyond their scope. Even if they went above and beyond, the arrangement still depends on you not caring that the ideas in the book aren’t your own, or that you don’t even understand the ideas in your own book, since you aren’t the one who laboured to earn an intuition about them.
Automating some work is fine. I ran both of the above paragraphs through ChatGPT for brevity. But even then, I wrote them myself beforehand, which generated many ideas I didn’t yet have at the outset of writing—for example, the principal-agent consideration. I even ran this paragraph through ChatGPT, but chose not to adopt its summary. And in the process of rewriting it, I thought to include the principal-agent example two sentences ago. If I had just asked ChatGPT (or some editor) to write a decent response to this post, even with general direction, I doubt it would have been as thoughtful as I was. But of course, ChatGPT or an editor could have come up with a comment exactly as thoughtful, right?