However, many realistic systems are chaotic and become unpredictable at some finite horizon.[4] At that point, even sophisticated agents cannot predict better than baseline heuristics, which require only a bounded level of skill.
It seems like I could rephrase that claim as “Humans are close to literal optimal performance on long term strategic tasks. Even a Jupiter brain couldn’t do much better at being a CEO than a human, because CEOs are doing strategy in chaotic domains.” (This may be a stronger claim than the one you’re trying to make, in which case I apologize for straw-manning you.)
That seems clearly false to me.
If nothing else, a human decision-maker's ability to take in input is severely limited by their reading speed. Jeff Bezos can maybe read and synthesize ~1000 pages of reports from his company in a day, if he spends all day reading reports. But even then, those reports are going to be consolidations produced by other people in the company, highlighting which things are important to pay attention to and compressing away many orders of magnitude of detail.
An AI CEO running Amazon could feasibly internalize and synthesize all of the data collected by Amazon directly, down to the detailed user-interaction time-logs for every user. (Or if not literally all the data, then at least many orders of magnitude more. My ballpark estimate is about 7.5 orders of magnitude more info, per unit time.)
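To make the arithmetic behind that ballpark explicit, here's a minimal back-of-envelope sketch. The bytes-per-page figure and the raw-log volume are my own illustrative assumptions, not actual Amazon numbers:

```python
import math

# Back-of-envelope check of the "~7.5 orders of magnitude" ballpark.
# Every number below is an illustrative assumption, not an Amazon figure.

PAGES_PER_DAY = 1_000         # human CEO reading flat-out (from the text above)
BYTES_PER_PAGE = 2_500        # ~500 words/page at ~5 bytes/word (assumption)
human_intake = PAGES_PER_DAY * BYTES_PER_PAGE    # ~2.5 MB/day

AI_INTAKE = 1e14              # hypothetical ~100 TB/day of raw logs (assumption)

gap = math.log10(AI_INTAKE / human_intake)
print(f"human: {human_intake:.1e} B/day, AI: {AI_INTAKE:.1e} B/day, "
      f"gap: {gap:.1f} orders of magnitude")
# -> gap: 7.6 orders of magnitude, in the neighborhood of the claim above
```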
I bet there are tons and tons of exploitable patterns that even a “human level” intelligence would be able to pick up on, if it could only read and remember all of that data. For instance, patterns in user behavior (impossible to notice from the high level summaries, but obvious when you’re watching millions of users directly) which would allow you to run more targeted advertising or more effective price targeting. Or coordination opportunities between separate business units inside of Amazon, which are detectable only if there is some one person who knows what is happening in all of them in high detail.
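As a toy illustration of the first kind of pattern (all numbers invented for the example): two segment-level shifts can exactly cancel in the top-line summary, so the signal exists only in the disaggregated data.

```python
# Toy illustration: a segment-level pattern that a top-line summary erases.
# All numbers are made up for the example.

raw_events = {
    # segment: (visits, conversions) for two consecutive weeks
    "new_users":       [(8_000, 800), (2_000, 100)],   # rate fell 10% -> 5%
    "returning_users": [(2_000, 100), (8_000, 800)],   # rate rose 5% -> 10%
}

for week in (0, 1):
    visits = sum(seg[week][0] for seg in raw_events.values())
    convs = sum(seg[week][1] for seg in raw_events.values())
    print(f"week {week}: top-line conversion {convs / visits:.1%}")
# Both weeks print 9.0%: the executive summary reads "flat".

for name, weeks in raw_events.items():
    rates = [c / v for v, c in weeks]
    print(f"{name}: {rates[0]:.1%} -> {rates[1]:.1%}")
# The raw data shows two large, opposite moves plus a traffic-mix shift --
# exactly the kind of signal that survives only if you can read everything.
```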
Is the posit here that those gains could be gotten by short-term, non-autonomous AI systems?
Depending on how we carve things up, that might turn out to be true. But that just seems to mean that the human CEO, in practice, is going to pass off virtually all of the decision-making to those “short term” AI systems.
Either all these short term AI systems are doing analysis and composing reports that a human decision-maker reads and synthesizes into a long term strategy, or the human decision-maker is superfluous: the high level strategy is overdetermined by the interactions and analyses of the “short term” AI systems.
In the first case, I’m incredulous that there’s no advantage to having a central decision-maker with even 100x the information-processing capacity, much less 1,000,000x.
(I suppose this is my double crux with the authors? They think that past some threshold, additional information synthesized by the central decision-maker is of literally negligible value? That a version of Amazon run by a Jeff Bezos who had time to read 100x as much about what is happening in the company would do no better than the version run by ordinary human Jeff Bezos?)
And in the second case, we’ve effectively implemented a long term planning AI out of a bunch of short term AI components.
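A schematic sketch of what that composition looks like, with hypothetical stub functions standing in for the components (nothing here is any real system’s API):

```python
# Schematic of the second branch: each component call is individually
# "short-term", but the loop composes them into a long-horizon planner.
# Purely illustrative stubs, not a proposed architecture.

def analyze(state):
    """Short-horizon component: summarize the current situation (stub)."""
    return f"analysis of {len(state)} prior steps"

def propose_step(analysis, goal):
    """Short-horizon component: recommend one bounded next action (stub)."""
    return f"next action toward '{goal}' given {analysis}"

def long_horizon_plan(goal, horizon=4):
    state = []
    # No single call below looks past one step, yet the loop as a whole
    # steers toward `goal` across an arbitrary horizon -- a long term
    # planner built out of short term parts.
    for _ in range(horizon):
        action = propose_step(analyze(state), goal)
        state.append(action)   # stand-in for executing in the world
    return state

for step in long_horizon_plan("grow market share"):
    print(step)
```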
Neither branch of that dilemma provides any safety against AI corporations outcompeting human-run corporations, or against takeover risk.