I think it’s mostly (3). Not because AI safety is an outlier, but because of how much work people had to do to come to grips with Moravec’s paradox.
If you take someone clever and throw them at the problem of GAI, the first thing they’ll think of is a system that does logical reasoning and can follow natural language commands. Their intuition will be based on giving orders to a human. It takes a lot of work to replace that intuition with something more mechanistic.
Like, it seems obvious to us now that building something that takes natural language commands and actually does what we mean is a very hard problem. But this is exactly a Moravec’s paradox situation, because understanding what people mean is mostly effortless and unconscious for us.