No, this isn’t the same. If you wish, you could try to restate what you think the main point of this post is, and I could say whether I think that’s accurate. At the moment, it seems to me like you’re misunderstanding what this post is saying.
I think the point of Thomas, Akash, and Olivia’s post is that more people should focus on buying time, because solving the AI safety/alignment problem before capabilities increase to the point of AGI is important, and right now the latter is progressing much faster than the former.
See the first two paragraphs of my post, although I could have made its point and the implicit modeling assumptions more explicit:
“AI capabilities research seems to be substantially outpacing AI safety research. It is most likely true that successfully solving the AI alignment problem before the successful development of AGI is critical for the continued survival and thriving of humanity.
Assuming that AI capabilities research continues to outpace AI safety research, the former will eventually result in the most negative externality in history: a significant risk of human extinction. Despite this, a free-rider problem causes AI capabilities research to myopically push forward, both because of market competition and great power competition (e.g., U.S. and China). AI capabilities research is thus analogous to the societal production and usage of fossil fuels, and AI safety research is analogous to green-energy research. We want to scale up and accelerate green-energy research as soon as possible, so that we can halt the negative externalities of fossil fuel use.”
If the “multiplier effects” framing helped you update, then that’s really great! (I also found this framing helpful when I wrote it this summer at SERI MATS, in the Alignment Game Tree group exercise for John Wentworth’s stream.)
I do think that, in order for the “multiplier effects” explanation to hold, the proposed action needs to slow down capabilities research relative to safety research. Slowing capabilities relative to safety as efficiently as possible is the core phenomenon that proves the optimality of the proposed action, not the multiplier effects themselves.
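To make that concrete, here is a minimal sketch under assumptions of my own (the quantities $r_s$, $r_c$, and $T$ are illustrative notation, not something either post defines): suppose safety research accrues at an average rate $r_s$, and the rate of capabilities progress $r_c$ determines the time $T$ remaining until AGI, with $T$ decreasing in $r_c$. The total safety progress achieved before AGI is then roughly

$$P \approx r_s \, T.$$

On this toy model, what ultimately has to increase is $P$; raising $T$ by slowing capabilities relative to safety, as efficiently as possible, is the mechanism doing the work, and the “multiplier effects” language can be read as describing how much extra $P$ a given time-buying intervention produces.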
That’s fair, but sorry[1], I misstated my intended question. I meant that I was under the impression that you didn’t understand the argument, not that you didn’t understand the action they advocated for.
I understand that your post and this post argue for actions that are similar in effect. And your post is definitely relevant to the question I asked in my first comment, so I appreciate you linking it.
[1] Actually, sorry. Asking someone a question that you don’t expect either yourself or them to benefit from is not nice, even if it was just due to careless phrasing. I just wasted your time.