There were two bits of evidence I used to infer this.
“If I’m not sure what it is, how can I remember what it was doing?” The car wasn’t sure whether Herzberg and her bike were a “Vehicle”, “Bicycle”, “Unknown”, or “Other”, and kept switching between classifications. That alone shouldn’t have been a major issue, except that with each switch it discarded its past observations of the object. Had the car kept that history, it would have seen that some sort of large object was moving steadily across the street on a collision course, and it would have had plenty of time to stop.
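To make that failure mode concrete, here is a minimal sketch (in Python, with hypothetical names; this is not Uber’s actual code or architecture) of a tracker that keeps its observation history attached to the track rather than to the label. Even while the classifier keeps changing its mind, a crude velocity estimate from that history is enough to show that something is crossing the road:

```python
# Hypothetical sketch only: a tracked object that keeps its observation
# history even when the classifier changes its mind about what it is.
from dataclasses import dataclass, field


@dataclass
class Track:
    track_id: int
    label: str                                    # current guess: "Vehicle", "Bicycle", ...
    history: list = field(default_factory=list)   # (time, x, y) observations

    def update(self, t, x, y, label):
        # Relabeling the object does NOT reset its history;
        # the trajectory belongs to the track, not to the label.
        self.label = label
        self.history.append((t, x, y))

    def velocity(self):
        # Crude finite-difference estimate from the last two observations.
        if len(self.history) < 2:
            return (0.0, 0.0)
        (t0, x0, y0), (t1, x1, y1) = self.history[-2], self.history[-1]
        dt = (t1 - t0) or 1e-6
        return ((x1 - x0) / dt, (y1 - y0) / dt)


# The label flips every frame, but the crossing motion is still obvious.
track = Track(track_id=1, label="Unknown")
track.update(0.0, 0.0, 10.0, "Vehicle")
track.update(0.5, 1.5, 10.0, "Bicycle")
track.update(1.0, 3.0, 10.0, "Unknown")
print(track.label, track.velocity())   # Unknown (3.0, 0.0): ~3 m/s across the lane
```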
The first bit (above) is that the car threw away its past observations. The second bit of evidence is a consequence of the first.
“If we see a problem, wait and hope it goes away.” The car was programmed so that when it determined things were very wrong, it would wait one second. Literally. Not even gently applying the brakes. This is absolutely nuts. If your system has so many false alarms that you need to include this kind of hack to keep it from acting erratically, you are not ready to test on public roads.
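For contrast, here is roughly what that “wait and hope” hack looks like in miniature. The names and the fixed one-second window are illustrative, not lifted from Uber’s software; the point is just that a detected hazard gets no response at all until the timer runs out:

```python
# Hypothetical sketch of a one-second "wait and hope" suppression hack.
SUPPRESSION_WINDOW_S = 1.0

def plan_response(hazard_detected, seconds_since_first_alarm):
    if not hazard_detected:
        return "cruise"
    if seconds_since_first_alarm < SUPPRESSION_WINDOW_S:
        # Do nothing and hope the detection was a false positive:
        # no gentle braking, no pre-charging the brakes, no warning.
        return "suppress"
    return "emergency_brake"

print(plan_response(True, 0.4))   # suppress: the car keeps moving at speed
print(plan_response(True, 1.2))   # emergency_brake: a full second later
```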
Humans have to write ugly hacks like this when the system isn’t architected from the bottom up to handle things like the flow of time. A machine learning system designed to handle time-series data should never have human beings in the loop this low down the ladder of abstraction.