My current hypothesis is:
Cheap practical sensors (cameras and, perhaps, radars) more or less require (aligned) AGI for safe operation
Better 3D sensors (lidars), which could, in theory, enable safe driving with existing control-theory approaches, are still expensive, impaired by weather and, possibly, by interference from other cars with similar sensors, i.e. currently impractical
No references, but can expand on reasoning if needed
I don’t think that self-driving cars are an AGI-complete problem, but I also haven’t thought a lot about this question. I would appreciate hearing your reasoning for why you think this is the case. Or maybe I misunderstood you? In which case I’d appreciate a clarification.
What I meant is that self-driving *safely* (i.e. at least somewhat more safely than humans currently drive, including all the edge cases) might be an AGI-complete problem, since:
We know it’s possible, because humans can do it
We don’t really know how to provide safety guarantees, in the sense of conventional high-safety systems, for current NN architectures
Driving safely with cameras likely requires considerable insight into societal and game-theoretic issues around infrastructure and other drivers’ behavior (e.g. drivers sometimes need to guess the reasonable intent behind incomplete infrastructure or another driver’s actions, and determining what counts as “reasonable” is the difficult part)
In contrast, if we have precise and reliable enough 3D sensors, we can relegate safety to conventional physics-based non-NN controllers and safety-programming techniques, which we already know how to work with (a minimal sketch of what I mean follows below). The current problems with such sensors are cost and weather resistance
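To make the contrast concrete, here is a minimal sketch (Python, with made-up function names and parameters, not any real AV stack) of the kind of physics-based check I have in mind. The point is that, given a trusted 3D distance measurement, the controller’s worst-case behavior can be analyzed with ordinary kinematics, with no NN in the safety path:

```python
# Sketch only: a physics-based emergency-braking check, assuming a reliable
# 3D sensor that reports the distance to the nearest obstacle ahead.
# All parameters (reaction time, deceleration, margin) are illustrative.

def min_stopping_distance(speed_mps: float,
                          reaction_time_s: float = 0.5,
                          max_decel_mps2: float = 6.0) -> float:
    """Worst-case stopping distance from basic kinematics: distance covered
    during the reaction time plus braking distance v^2 / (2a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * max_decel_mps2)

def should_emergency_brake(obstacle_distance_m: float,
                           speed_mps: float,
                           safety_margin_m: float = 2.0) -> bool:
    """Brake whenever the measured gap, minus a fixed margin, no longer
    covers the worst-case stopping distance."""
    return obstacle_distance_m - safety_margin_m <= min_stopping_distance(speed_mps)

# Example: at 20 m/s (~72 km/h), an obstacle 45 m ahead already triggers braking,
# since the worst-case stopping distance is about 43.3 m.
print(should_emergency_brake(obstacle_distance_m=45.0, speed_mps=20.0))  # True
```

The guarantee here is conditional on the sensor: if the distance estimate is precise and reliable, the bound is a straightforward physics argument, which is exactly the kind of reasoning that is hard to reproduce for a camera-plus-NN perception pipeline.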
I don’t think computer vision has progressed enough to build a good, robust 3D representation of the world from cameras alone.