You can, sort of, as long as you don't need specific CUDA features. I've generally had good experiences running GitHub PyTorch/TensorFlow code on ROCm. Though that was on a Radeon VII, which isn't sold anymore and was basically a "datacenter GPU for home use". My impression is that AMD has actually gotten worse recently at supporting deep learning on consumer GPUs, so maybe that's true nowadays.