Aren’t you kind of preaching to the choir? Who involved in AI alignment is actually giving advice like this?
Wouldn’t the median respondent tell A and B something like “go start participating at the state of the art by reading and publishing on the Alignment Forum and by reading, reproducing, and publishing AI/ML papers, and maybe go apply for jobs at alignment research labs”?
It’s advice that you generally see from LessWrongers and rationality-adjacent people who are not actively working on technical alignment.
I don’t know if that’s true, but it might be. That does not change the fact that there is a lot of “stay realistic”-type advice coming from people in these circles. I’d wager this type of advice does not generally come from a more lucid view of reality, but rather from (irrationally high) risk aversion.
If I had to summarize this in one sentence: we need to be much more risk-tolerant and signalling-averse if we want a chance at solving the most important problems.