It turns out, PyTorch’s pseudorandom number generator produces different numbers on different GPUs even when I set the same random seed. Consider the following file do_different_gpus_randn_the_same.py:
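Something along these lines (a minimal sketch; the real file may differ, but the idea is just to seed PyTorch and print a few values drawn directly on the GPU):

import torch

# same seed everywhere; this also seeds the CUDA generators
torch.manual_seed(0)

# draw a small tensor directly on the current GPU and show it
foo = torch.randn(5, device="cuda")
print(foo)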
On my system, two runs on two different GPUs print different numbers despite the identical seed.
Due to this, I am going to generate all pseudorandom numbers on my CPU and then transfer them to GPU for reproducibility’s sake, like so:

foo = torch.randn(500, 500, device="cpu").to("cuda")
You’re going to need to do more than that if you want full reproducibility, because GPUs aren’t even deterministic in the first place, and that nondeterminism is big enough to affect DRL/DL results.
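For reference, a stricter setup looks something like the sketch below. These are real PyTorch/CUDA knobs, but whether they suffice depends on your ops, drivers, and hardware:

import os
import random

import numpy as np
import torch

# cuBLAS reads this at CUDA init time; required for deterministic GEMMs
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

# seed every RNG in sight
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)

# fail loudly on any op that has no deterministic implementation
torch.use_deterministic_algorithms(True)

# stop cuDNN from autotuning, which can pick different algorithms run to run
torch.backends.cudnn.benchmark = False

Even then, PyTorch only promises determinism within a single hardware/software stack; two different GPU models can still disagree bit for bit.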
Tbh, what I want right now is a very weak form of reproducibility: I want the experiments I’m doing nowadays to behave the same way on my own computer every time. That’s worked for me so far.