A friend pointed out on Facebook that Gato uses TPU-v3s. Not sure why; I thought Google already had v4s available for internal use a while ago? In any case, the TPU-v4 might potentially help a lot with the latency issue.
Two main options:
* It was trained a while ago (e.g. a year ago) but only published now
* All the TPU-v4s are busy with something even more important
They trained it on TPU-v3s; however, the robot inference was run on a GeForce RTX 3090 (see section G).
TPUs are designed primarily for data centers and are not really usable for on-device inference.