Have you ever driven subconsciously on system 1 autopilot while your conscious system 2 is thinking about something else? But the moment there's any hint of danger or something out of the ordinary, your full conscious system 2 attention snaps back to driving. Imagine how unsafe and brittle human drivers would be without that occasional full conscious system 2 focus.
The analogy isn't perfect, but current DL systems are more like system 1, which doesn't scale well to edge-case scenarios. Some rare situations require complex chain-of-thought reasoning over holistic world-model knowledge about human and/or animal behavior, common-sense few-/zero-shot reasoning from limited past experience, etc.
So the fat-tail situations may be AGI-complete in difficulty, and the costs of failure are also enormous, so mostly option 2.