I anticipate higher, because the PhD comes with a sweet certification at the end, and likely more career capital. That's something we don't currently give alignment researchers, and it would be hard to give, since they often believe the world will end very soon, which reduces the value of skill building and certifications.
Like, I do in fact think ML PhDs get paid more than alignment researchers, once you account for these benefits.