Side point: this whole idea is arguably somewhat opposed to what Cal Newport in Deep Work describes as the “any-benefit mindset”, i.e. people’s tendency to adopt tools when they can see any benefit in them (Facebook being one example, as it certainly does come with the benefit of keeping you in touch with people you would otherwise have no connection to), while ignoring the hidden costs of those tools (such as the time and attention they require). I think both ideas are worth keeping in mind when evaluating the usefulness of a tool. Ask yourself both whether the tool’s usefulness can be deliberately increased, and whether its benefits are ultimately worth its costs.
I was thinking of a similar point: some programmers (myself included) tend to obsess over small tweaks to their editor/IDE/shell workflow without really checking whether optimizing that workflow to the nth degree actually saves enough time to make the optimization worthwhile. Similarly, a hypothetical AI might be very useful once you understand how to write the perfect prompt, but the time and effort needed to figure out how to craft the prompt just right might not be worth it.
I suspect ChatGPT isn’t quite that narrow, however, and I’ve already seen positive returns from basic experimentation with varying prompts and regenerating answers.