I’m gonna pass on the question of whether it’s defensible (like you, the thought of giving money to OpenAI makes me uneasy), but I do like the idea of an “Alignment tax”. By general principles one should expect that there is some ideal proportion of money flowing into alignment/regulation efforts vs. AI development that makes the future maximally safe. So steering towards that seems like the right thing to do.