Math doesn’t have GOALS. But we constantly give goals to our AIs.
If you use AI every day and are excited about its ability to accomplish useful things, it's hard to keep the dangers in mind. I see that in myself.
But that doesn’t mean the dangers are not there.
In most circumstances, Tesla's system is already better than human drivers.
But there's a huge psychological barrier to trusting algorithms with safety, especially where participants are involuntary (pedestrians, say) - it's why we still have airline pilots. We'd rather accept a higher accident rate with humans in charge than a lower but non-zero rate with the algorithm in charge. (If the rate were zero, that would be different, but zero seems impossible.)
That psychology also shapes the legal barriers: we inevitably demand more of the automated system than we do of human drivers.
Finally, liability. Today drivers bear the liability risk for accidents, and pay for insurance to cover it. It seems impossible to justify putting that burden on drivers when they aren't in charge; those who write the algorithms and build the hardware (the car manufacturers) will bear it instead. And that's pricey, so manufacturers don't have much incentive to go there.