A Medium post from 2019 says: “Tesla’s version, however, is 10 times larger than Inception. The number of parameters (weights) in Tesla’s neural network is five times bigger than Inception’s. I expect that Tesla will continue to push the envelope.”
Wolfram says of Inception v3: “Number of layers: 311 | Parameter count: 23,885,392 | Trained size: 97 MB”
I’m not sure which version of Inception was being compared to Tesla’s, though.
Take with a grain of salt, but maybe ~119M? (i.e., five times Inception v3’s parameter count; a quick check of the arithmetic is below.)
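For what it’s worth, a minimal sketch of where that figure comes from, assuming the Medium post’s “five times” claim refers to Inception v3’s parameter count as quoted from Wolfram above:

```python
# Back-of-the-envelope estimate of Tesla's parameter count, assuming the
# Medium post's "five times bigger" applies to Inception v3's 23,885,392
# parameters (the Wolfram figure above). Purely illustrative.
inception_v3_params = 23_885_392
tesla_estimate = 5 * inception_v3_params
print(f"{tesla_estimate:,}")  # 119,426,960 -> roughly 119M parameters
```

If the “10 times larger” line is the one about parameters instead, the estimate would be ~239M, so which version of Inception (and which claim refers to parameters) matters a lot.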
Thanks!
I wonder whether it would suddenly start working a lot better if they could make all their nets, say, 1000x bigger...