This may be the most important question about the path of near-future societal change.
The main problem in analyzing, predicting, or impacting this future is that there are very few pure-bullshit or pure-value jobs or tasks. It’s ALWAYS a mix, and the borders between components of a job are nonlinear and fuzzy. And not in a way that a good classifier would help—it’s based on REALLY complicated multi-agent equilibria, with reinforcements from a lot of directions.
Your bullshit job description is excellent:
You send emails and make calls and take meetings and network to support inter-managerial struggles and fulfill paperwork requirements and perform class signaling to make clients and partners feel appreciated.
The key there is “make clients and partners feel appreciated”. That portion is a race. If it fails, some other company gets the business (and the jobs). I argue that there are significant relative measures (races) in EVERY aspect of human interaction, and that this is embedded enough in human nature that it’s unlikely to be eliminated.
[edit, after a bit more thought]
The follow-up question is about when AI is trustworthy (and trusted; cf. lack of corporate internal prediction markets) enough to dissolve some of the races, making the jobs pure-bullshit, and thus eliminating them. That bullshit job stops being funded if the clients and partners make their spending decisions based on AI predictions of performance, not on human trust in employees.