I think your interpretation is fairly uncharitable. If you have further examples of this deceptive pattern from those sympathetic to AI risk, I would change my perspective, but the speculation in the post plus this example weren’t compelling:
I watched the video, and firstly, Senator Peters seems to trail off after the quoted part and ends his question by saying “What’s your assessment of how fast this is going and when do you think we may be faced with those more challenging issues?”. So straightforwardly his question is about timelines, not about risk as you frame it. Indeed, Matheny (after two minutes) literally responds “it’s a really difficult question. I think whether AGI is nearer or farther than thought …” (emphasis different to yours), which makes it likely to me that Matheny is expressing uncertainty about timelines, not risk.
Overall I agree that this was an opportunity for Matheny to discuss AI x-risk, and plausibly it wasn’t the best use of time to discuss the uncertainty of the situation. But saying this is dishonesty doesn’t seem well supported.
No, the question was about whether there are apocalyptic risks and on what timeline we should be concerned about apocalyptic risks.
The questioner used the term ‘apocalyptic’ specifically. Three people answered the question, and the first two both also alluded to ‘apocalyptic’ risks and sort of said that they didn’t really think we need to think about that possibility. Their referring to apocalyptic risks goes to show that it was a key part of what the questioner wanted to understand: to what extent these risks are real, and on what timeline we’ll need to react to them. My read is not that Matheny actively misled the questioner, but that he avoided answering, which is “hiding” rather than “lying” (I don’t agree with the OP that they’re identical).
I think the question was unclear, so it was more acceptable not to directly address whether there is apocalyptic risk, but I think many people I know would definitely have said “Oh, to be clear, I totally disagree with the previous two people: there are definitely apocalyptic risks, and we are not prepared for them and cannot deal with them after the fact (as you just mentioned being concerned about).”
Extra detail on what happened
Everyone who answered explicitly avoided making timeline predictions and instead talked about where they think the policy focus should be.
The first person roughly said “We have many problems with AI right now, let’s focus on addressing those.”
The middle person said the AI problems are all of the sort “people being sent to jail because of an errant ML system”.
Here’s the middle person in full, clearly responding to the question of whether there are apocalyptic risks to be worried about:
People ask me what keeps me up at night. AGI does not keep me up at night. And the reason why it doesn’t, is because (as Ms Gibbons mentioned) the problems we are likely to face, with the apocalyptic visions of AGI, are the same problems we are already facing right now, with the systems that are already in play. I worry about people being sent to jail because of an errant ML system. Whether you use some fancy AGI to do the same thing, it’s the same problem… My bet is that the harms we’re going to see, as these more powerful systems come online — even with ChatGPT — are no different from the harms we’re seeing right now. So if we focus our efforts and our energies on governance and regulation and guardrails to address the harms we’re seeing right now, they will be able to adjust as the technology improves. I am not worried that what we put in place today will be out of date or out of sync with the new tech. The new tech is like the old tech, just supercharged.
Matheny didn’t disagree with them and didn’t address the question of whether the risk is apocalyptic; he just said he was uncertain, and then listed the policies he wanted to see: setting standards with third-party audits, and governance of the hardware supply chain to track it and ensure it doesn’t go to places that aren’t democracies.
To not state that you disagree with the last two positions signals that you agree with them, as the absence of your disagreement is evidence of the absence of disagreement. I don’t think Matheny outright said anything false but I think it is a bit misleading to not say “I totally disagree, I think the new tech will be akin to inventing a whole new superintelligent alien species that may kill us all and take over the universe” if something like that is what you believe.
My read is that he was really trying as hard as he could to not address whether there are apocalyptic risks and instead just focus on encouraging the sorts of policies he thought should be implemented.
My read is that he was really trying as hard as he could to not address whether there are apocalyptic risks and instead just focus on encouraging the sorts of policies he thought should be implemented.
Why, though?
Does he know something we don’t? Does he think that if he says those risks are real he’ll lose political capital? That people won’t put him or his friends in positions of power, because he’ll be branded as a kook?
Is he just in the habit of side-stepping the weird possibilities?
This looks to me, from the outside, like an unforced error. They were asking the question, about some core beliefs, pretty directly. It seems like it would help if, in every such instance, the EA people who think that the world might be destroyed by AGI in the next 20 years said that they think the world might be destroyed by AGI in the next 20 years.
As Ben said, this seems incongruent with the responses the other two people gave, neither of which talked that much about timelines, though both did seem to directly respond to the concern about catastrophic/apocalyptic risk from AGI.
I do agree that it’s plausible Matheny somehow understood the question differently from the other two people and interpreted it in a more timelines-focused way, though he also heard the other two people talk, which makes that somewhat less likely. I do agree that the question wasn’t asked in the most cogent way.
Thanks for checking this! I mostly agree with all of your original comment now (except the first part suggesting it was point-blank, but we’re quibbling over definitions at this point). This does seem like a case of intentionally not discussing risk.
A few other examples off the top of my head:
ARC graph on RSPs with the “safe zone” part
Anthropic calling ASL-4 accidental risks “speculative”
the recent TIME article saying there’s no trade-off between progress and safety
More generally, having talked to many people in AI policy/safety, I can say it’s a very common pattern. On the eve of the FLI open letter, one of the most senior people in the AI governance & policy x-risk community was explaining that it was stupid to write this letter and that it would make future policy efforts much more difficult, etc.