> Such decisions must not be delegated to unelected tech leaders.
I don’t agree with the clear implication that the problem with tech leaders is that they weren’t elected. I commonly think their judgment is better than that of people who are elected and in government. Competent and elected people would be best, but given the choice between only competent and only elected (in the current shitty electoral systems of the UK and the US that I’m familiar with), I think I prefer competent.
> If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
I don’t think this threat is very good. Firstly, it seems a bit empty: this is not the government speaking, and I don’t know that FLI is in a position to make the government do this. Secondly, it feels closer to causing uncontrollable chaos than to doing anything sensible. Maybe it works, but I haven’t seen any arguments that governments won’t just become pretty corrupt and mess everything up if given vague yet significant power. I would much rather keep the current status quo, where, in principle, you can have a conversation with the relevant decision-makers and argue with them, than surrender to government bureaucracies where pretty commonly nobody has the power to do anything differently, where any free power is competed over by the biggest factions in the country, where a lot of the unaccountable decision-makers are pretty corrupt (cf. Patrick McKenzie’s and Zvi’s writing about how Covid was dealt with), and where close to literally zero people understand how the systems work or the arguments for existential risks here.
I think people writing realistic and concrete stories of how this could go well would change my mind here, but I don’t want to put my name to this threat. It seems, overall, like it reduces civilization’s agency in the matter, and I wouldn’t currently take this action myself.
I think a bunch of the rest of the letter is pretty good. I like the AI summer and AI fall bit at the end (it is tasteful in relation to “AI Winter”), and I broadly like the proposal to redirect AI R&D to focus on goals like making systems interpretable and aligned. A bunch of the other sentences seem true, and I do think the letter was pretty well written overall, but I’m not on board with the government stuff, and I haven’t seen any persuasive arguments for it.
I’m sticking with my position of not signing this.
> and where [in the government] close to literally zero people understand how the systems work or the arguments for existential risks here.
Just want to flag that I’m pretty sure this isn’t true anymore. At least a few important people in the US government (and possibly many) have now taken this course. I am still working on my technical review of the course for AIGS Canada, but my take so far is that it provides a good education on relevant aspects of AI for a non-technical audience and also focuses quite a bit on AI existential risk issues.
(I know this is only one point out of many that you made, but I wanted to respond to it when I spotted it and had time.)
Yep, it seems good to me to respond to just one point that you disagreed with; definitely positive to do so relative to responding to none :)
I genuinely have uncertainty here. I know there were a bunch of folks at CSET who understood some of the arguments, but I’m not sure whether/what roles they have in government; I think of many of them as being in “policy think tanks” outside of government. Matheny was in the White House for a while, but now he runs RAND; if he were still there, I would be wrong, and there would be at least one person who I believe groks the arguments and how a neural net works.
Most of my current probability mass is on literally 100% of elected officials not understanding the arguments or how a neural net works, but I acknowledge that they’re not the only people involved in passing legislation/regulation.