This is an important case to think about, and I believe it is understudied. What separates current AIs from the CEO role, and how long will it take to close the gap? I see three things:
Long-term thinking, agency, the ability to remember things, not going crazy in an hour or two. These seem to me like facets of the same problem, in the sense that I expect one innovation to solve all of them. A lot of effort is focused on this; it has been a known big problem since GPT-4 and Sydney/Bing, 2½ years ago. So, by the Lindy principle, it should take another 2½ years to solve.
Persuasiveness. I’ve known a few CEOs in my life; they were all more persuasive than average, and one was genuinely unearthly in her ability to convince people of things. LLMs have been steadily increasing in persuasiveness, and are now par-human. So I think scaling will take care of this. Perhaps a year or two?
Experience. I don’t know how much of this can be inculcated via training data, and how much requires actually managing people and products and thinking about what can go wrong. Every CEO has to deal with individual subordinates, customers, and counterparties. How much of the job is learning the ins and outs of those particular people? Or does it suffice to have spent a thousand subjective years reading about a million people? If the latter, we may already have this.
(Side point: As an engineer watching CEOs, I am amazed by their ability to take a few scattered hints spread across their very busy days, and assemble them into a theory of what’s going on and what to do about it. They’re willing to take what I would consider intolerably flimsy evidence and act on it. When this doesn’t work, it’s called “jumping to conclusions”; when it does, it’s called “a genius move”. AIs should be good at this if you turn up their temperature.)
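(A quick technical gloss on that last clause: “turning up the temperature” just means dividing the model’s next-token logits by a constant T before softmax sampling, which flattens the distribution and makes long-shot continuations more likely. Below is a minimal toy sketch of that mechanism; the function name and logit values are made up for illustration, not any particular model’s API.)

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng=None):
    """Sample a token index after temperature-scaling the logits.

    Higher temperature flattens the softmax, so lower-probability
    ("flimsy-evidence") continuations get chosen more often.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [3.0, 1.0, 0.2]  # toy next-token scores for three candidate tokens
for t in (0.5, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t) for _ in range(10_000)]
    freqs = np.bincount(picks, minlength=3) / len(picks)
    print(f"T={t}: {freqs.round(2)}")
# At T=0.5 nearly all mass goes to the top token; at T=2.0 the long shots
# get sampled far more often, i.e. the "bold leap" behavior described above.
```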