The programmer spends $20,000 on compute from Google. They claim to be working on an AI project and give no more details. They upload a compiled program and run it.
Even easier than you think. TRC (TPU Research Cloud) will give you a lot more than $20k of TPU compute for free after a 5-minute application. All you need is a credit card and a working GCP account to cover incidentals like bucket storage/bandwidth (maybe a few hundred dollars a month for pretty intense use). One of the greatest steals in DL.
TRC also has essentially no monitoring capability, only the vaguest metrics of TPU usage. (This led to a funny situation when Shawn Presser & I were training an extremely wide context window GPT-2 which needed far more RAM than any individual TPU has; so, because each TPU is attached to a chonky host CPU with 200+ GB of RAM, we simply ran the whole thing in CPU RAM. TRC was mystified because we had all these TPUs locked up, logging as idle, and apparently doing absolutely nothing. I am told that when Shawn explained what was going on to a Googler, they were horrified at our perversion of the hardware. :)
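For the curious, here is roughly what that trick looks like. This is a minimal sketch in modern JAX, not the code we actually ran (and the shapes are made up): on a Cloud TPU VM, you can commit arrays to the host's CPU device, and the computation then runs entirely in host RAM while the TPU cores sit idle.

```python
# Hypothetical sketch of "running on the TPU host's CPU RAM" in modern JAX.
# Everything is committed to the CPU device attached to the TPU, so the
# job uses the 200+ GB of host RAM while the TPU cores log as idle.
import jax
import jax.numpy as jnp

cpu = jax.devices("cpu")[0]  # the TPU host's CPU

# Parameters live in host RAM rather than per-core TPU memory.
params = jax.device_put(jnp.zeros((65_536, 8_192), jnp.float32), cpu)

@jax.jit
def forward(p, x):
    return jnp.tanh(x @ p)  # stand-in for a real model forward pass

# Committed CPU inputs keep the jitted computation on the CPU backend.
x = jax.device_put(jnp.ones((8, 65_536), jnp.float32), cpu)
y = forward(params, x)
print(y.shape)  # (8, 8192) -- computed on the CPU; TPUs untouched
```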