I think the main feature of the AI transition that people around here missed, or didn’t adequately foreground, is that AI will be “worse is better.” AI art will be clearly worse than the best human art—maybe even than median human art—but will cost pennies on the dollar, and so we will end up with more, worse art everywhere. (It’s like machine-made t-shirts compared to tailored clothes.) AI-enabled surveillance systems will likely look more like shallow understanding of all communication than a single overmind thinking hard about which humans are up to what trouble.
This was even hinted at in discussions of human intelligence; the following comment is from 2020, but I remember seeing this meme on LW much earlier:
When you think about it, because of the way evolution works, humans are probably hovering right around the bare-minimal level of rationality and intelligence needed to build and sustain civilization. Otherwise, civilization would have happened earlier, to our hominid ancestors.
Similarly, we should expect widespread AI integration at about the bare-minimum level of competence and profitability.
I often think of the MIRI view as focusing on the last AI; I.J. Good’s “last invention that man need ever make.” It seems quite plausible that those will be smarter than the smartest humans, but possibly in a way that we consider very boring. (The smartest calculators are smarter than the smartest humans at arithmetic.) Good uses the idea of ultraintelligence for its logical properties (it fits nicely into a syllogism) rather than its plausibility.
[Thinking about the last AI seems important because choices we make now will determine what state we’re in when we build the last AI, and aligning it is likely categorically different from aligning AI up to that point, so we need to get started now and try to develop in the right directions.]