Thanks, as far as I can tell this is a mix of critiques of the strategic approach (fair enough), of communication style (fair enough), and partial misunderstandings of the technical arguments.
instead of a succession of events which need to go your way, I think you should aim for incremental marginal gains. There is no cost-effectiveness analysis…
I agree that we should not get hung up on a succession of events going a certain way. IMO, we need to get good at simultaneously broadcasting our concerns in a way that’s relatable to other concerned communities, and opportunistically looking for new collaborations there.
At the same time, local organisers often build up an activist movement by ratcheting up the number of people joining the events and the pressure those people put on institutions to make changes. These are basic, cheap civil disobedience tactics that have worked for many movements (climate, civil rights, feminist, changing a ruling party, etc.). I prefer to go with what has worked, instead of trying to reinvent the wheel based on fragile cost-effectiveness estimates. But if you can think of concrete alternative activities that also have a track record of working, I’m curious to hear them.
Your press release is unreadable (poor formatting), and sounds like a conspiracy theory (catchy punchlines, ALL CAPS DEMANDS, alarmist vocabulary and unsubstantiated claims)
I think this is broadly fair. The turnaround time of this press release was short, and I think we should improve on the formatting and give more nuanced explanations next time.
Keep in mind the text is not aimed at you but at the broader group of people who are feeling concerned and whom we want to encourage to act. A press release is not a paper. Our press release is more like a call to action – there is a reason to add punchy lines here.
The figures you quote are false (the median from AI Impacts is 5%) or knowingly misleading (the numbers from Existential risk from AI survey are far from robust and as you note, suffer from selection bias)
Let me recheck the AI Impacts paper. Maybe I was ditzy before, in which case, my bad.
As you saw from my commentary above, I was skeptical about using that range of figures in the first place.
You conflate AGI and self-modifying systems
Not sure what you see as the conflation?
AGI, as an autonomous system that would automate many jobs, would necessarily be self-modifying – even in the limited sense of adjusting its internal code/weights on the basis of new inputs.
Your arguments are invalid
The reasoning shared in the press release by my colleague was rather loose, so I more rigorously explained a related set of arguments in this post.
As to whether the arguments in points 1 to 6 above are invalid, I haven’t seen you point out inconsistencies in the logic yet, so as it stands you seem to be sharing a personal opinion.
I am appalled to see this was not downvoted into oblivion!
Should I comment on the level of nuance in your writing here? :P
I definitely made a mistake in quickly checking that number shared by my colleague.
The 2023 AI Impacts survey shows a mean risk of 14.4% for the question “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species within the next 100 years?”.
Whereas the other, smaller-sample survey gives a median estimate of 30%.
I already thought using those two figures as a range did not make sense, but putting a mean and a median in the same range is even more wrong.
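To make the mean-vs-median point concrete, here is a minimal sketch with made-up illustrative numbers (these are not responses from either survey) showing how far apart the two statistics can sit on the same skewed set of answers:

```python
# Illustrative only: hypothetical extinction-risk estimates (not real survey data),
# skewed the way such responses typically are (many low answers, a few high ones).
import statistics

estimates = [0.01, 0.02, 0.05, 0.05, 0.10, 0.10, 0.15, 0.30, 0.50, 0.80]

print(f"mean:   {statistics.mean(estimates):.2f}")    # pulled up by the few high answers
print(f"median: {statistics.median(estimates):.2f}")  # the middle respondent
# With skewed answers the mean and median can differ a lot, so quoting a mean
# from one survey and a median from another as endpoints of a single "range"
# mixes two different summary statistics over two different samples.
```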
Thanks for pointing this out! Let me add a correcting comment above.