Here’s my takeaway:
There are mechanistic reasons for humanity’s “Sharp Left Turn” with respect to evolution. Humans were bottlenecked by knowledge transfer between generations, and the development of cultural transmission allowed us to pass our lifetime learnings on to the next generation instead of waiting on the slow process of natural selection.
Current AI development is not bottlenecked in the same way and is therefore highly unlikely to undergo a sharp left turn for that same reason. Ultimately, evolution analogies can smuggle in bad unconscious assumptions without any rigorous mechanistic understanding. Instead of using evolution to argue for a Sharp Left Turn, we should look for arguments that are mechanistically specific to current AI development, because then we are much less likely to make confused mistakes that unconsciously rely on assumptions imported from human evolution.
AI may still undergo a fast takeoff (through AI driving capabilities research or iteratively refining its own training data), but for AI-specific reasons, so we should be paying attention to how that kind of fast takeoff might happen and how to deal with it.
Edited after Quintin’s response.
Pretty much. Though I’d call it a “fast takeoff” instead of “sharp left turn” because I think “sharp left turn” is supposed to have connotations beyond “fast takeoff”, e.g., “capabilities end up generalizing further than alignment”.
Right, you are saying evolution doesn’t provide evidence for AI capabilities generalizing further than alignment, but then you only consider the fast takeoff part of the SLT to be the concern. I know you have stated reasons why alignment would generalize further than capabilities, but do you not think an SLT-like scenario could occur in the two capability-jump scenarios you listed?