The letter isn’t perfect, but the main ask is worthwhile as you said. Coordination is hard, stakes are very high and time may be short, so I think it is good to support these efforts if they are in the ballpark of something you agree with.
I don’t agree with the recommendation, so I don’t think I should sign my name to it.
To describe a concrete bad thing that may happen: suppose the letter is successful and then there is a pause. Suppose a bunch of AI companies agree to some protocols and say that these protocols “ensure that systems adhering to them are safe beyond a reasonable doubt”. If I (or another signatory) were then to say “But I don’t think that any such protocols exist”, I think they’d be within their rights to say “Then why on Earth did you sign this letter saying that we could find them within 6 months?” and to not trust me again to mean the things I say publicly.
The letter says to pause for at least 6 months, not exactly 6 months.
So anyone who doesn’t believe that protocols exist to ensure the safety of more capable AI systems shouldn’t avoid signing the letter for that reason, because the letter can be interpreted as supporting an indefinite pause in that case.
Oh, I didn’t read that correctly. Good point.

I am concerned about some other parts of it that seem to convey a feeling of “trust in government” that I don’t share, and I am concerned that if this letter is successful then governments will get involved in a pretty indiscriminate and corrupt way and then everything will get worse; but my concern is somewhat vague and hard to pin down.
I think it’d be good for me to sleep on it, and see if it seems so bad to sign on to the next time I see it.

I’ve slept, and now looked it over again.
Such decisions must not be delegated to unelected tech leaders.
I don’t agree with the clear implication that the problem with tech leaders is that they weren’t elected. I commonly think their judgment is better than people who are elected and in government. I think competent and elected people are best, but given the choice between only competent or only elected (in the current shitty electoral systems of the UK and the US that I am familiar with), I think I prefer competent.
If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
I don’t think this threat is very good. Firstly, it seems a bit empty: this is not the government speaking, and I don’t know that FLI is in a position to make the government do this. Secondly, it feels closer to just causing uncontrollable chaos than to doing anything sensible. Maybe it works, but I haven’t seen any arguments that governments won’t just become pretty corrupt and mess everything up if given pretty vague yet significant power. I would much rather keep the current status quo, where I think, in principle, you can have a conversation with the relevant decision-makers and have arguments with them, than surrender to government bureaucracies where pretty commonly there is nobody with any power to do anything differently, where any free power is being competed over by the biggest possible factions in the country, where a lot of the unaccountable decision-makers are pretty corrupt (cf. Patrick McKenzie’s and Zvi’s writing about how Covid was dealt with), and where close to literally zero people understand how the systems work or the arguments for existential risks here.
I think people writing realistic and concrete stories of how this could go well would change my mind here, but I don’t want to put my name to this threat; it seems overall like it reduces civilization’s agency in the matter, and I wouldn’t currently take this action myself.
I think a bunch of the rest of the letter is pretty good. I like the AI summer and AI fall bit at the end (it is tasteful in relation to “AI Winter”), and I broadly like the proposal to redirect AI R&D to focus on the listed goals like making systems more interpretable and aligned. A bunch of the other sentences seem true, and I do think this was pretty well written overall, but I’m not on board with the government stuff, and I haven’t seen any persuasive arguments for it.
I’m sticking with my position of not-signing this.
and where [in the government] close to literally zero people understand how the systems work or the arguments for existential risks here.
Just want to flag that I’m pretty sure this isn’t true anymore. At least a few important people in the US government (and possibly many) have now taken this course. My technical review of the course for AIGS Canada is still in progress, but my take so far is that it provides a good education on relevant aspects of AI for a non-technical audience and also focuses quite a bit on AI existential risk issues.
(I know this is only one point out of many you made, but I wanted to respond to it when I spotted it and had time.)
Yep, it seems good to me to respond to just one point that you disagreed with, definitely positive to do so relative to responding to none :)
I genuinely have uncertainty here. I know there were a bunch of folks at CSET who understood some of the arguments, but I’m not sure whether/what roles they have in government; I think of many of them as being in “policy think tanks” that are outside of government. Matheny was in the White House for a while, but now he runs RAND; if he were still there I would be wrong, and there would be at least one person who I believe groks the arguments and how a neural net works.
Most of my current probability mass is on literally 100% of elected officials not understanding the arguments or how a neural net works, but I acknowledge that they’re not the only people involved in passing legislation/regulation.
Each 6-month pause costs all the knowledge and potential benefits of less stupid models (and prevents anyone from discovering their properties, which may or may not be as bad as feared). Each pause narrows the lead over less ethical competitors.
To state the obvious, a pause narrows the lead over less ethical competitors only if the pause is not enforced against those competitors. I don’t think anyone is in favor of an unenforced pause: that would indeed be stupid, as basic game theory says.
My impression is that we disagree on how feasible it is to enforce the pause. In my opinion, at the moment, it is pretty feasible, because there simply are not so many competitors. Doing an LLM training run is a rare capability now. Things are fragile and I am in fact unsure whether it would be feasible next year.
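To make the game-theory point above concrete, here is a minimal sketch with made-up payoff numbers (my own illustrative assumptions, not anything from the letter or this thread): without enforcement, racing is each lab’s dominant strategy, so a one-sided pause just cedes the lead, whereas enforcement removes the racing option for everyone.

```python
# Minimal sketch (illustrative payoffs only, not from the letter or this thread)
# of why an unenforced pause is dominated: "Race" beats "Pause" against either
# opponent action, so the no-enforcement equilibrium is (Race, Race).

# Payoffs (row player, column player): higher is better for that player.
# The numbers are arbitrary stand-ins chosen only to make Race strictly dominant.
PAYOFFS = {
    ("Pause", "Pause"): (3, 3),   # everyone slows down, shared safety benefit
    ("Pause", "Race"):  (0, 4),   # you pause, the rival races ahead
    ("Race",  "Pause"): (4, 0),
    ("Race",  "Race"):  (1, 1),   # race dynamics, worst shared outcome
}

def best_response(opponent_action: str) -> str:
    """Row player's best reply to a fixed opponent action."""
    return max(["Pause", "Race"], key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Unenforced pause: Race is the best response to either opponent action.
assert best_response("Pause") == "Race"
assert best_response("Race") == "Race"
print("Unenforced equilibrium:", (best_response("Race"), best_response("Race")))

# Enforced pause: an enforcer removes "Race" from the action set,
# leaving (Pause, Pause) -- the outcome the letter is asking for.
print("Enforced outcome:", ("Pause", "Pause"))
```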
How would the pause be enforced against foreign governments? We can’t stop them from building nukes or starting wars or mass imprisoning their own people or mass murder or...
How would it be enforced against foreign and domestic companies working under the umbrella of the government?
How would the pause be enforced at all? No incident has actually happened yet; what are the odds the US government passes legislation about a danger that hasn’t even been seen?
I think you can support a certain policy without putting your name to a flawed argument for that policy. And indeed, ensuring that typical arguments for your policy are high-quality is a form of support.