No, the question was about whether there are apocalyptic risks and on what timeline we should be concerned about apocalyptic risks.
The questioner used the term ‘apocalyptic’ specifically. Three people answered the question, and the first two both also alluded to ‘apocalyptic’ risks and more or less said that they didn’t think we need to consider that possibility. Their referring to apocalyptic risks shows that it was a key part of what the questioner wanted to understand — to what extent these risks are real and on what timeline we’ll need to react to them. My read is not that Matheny actively misled the questioner, but that he avoided answering, which is “hiding” rather than “lying” (I don’t agree with the OP that the two are identical).
I think the question was unclear, so it was more acceptable not to directly address whether there is apocalyptic risk. But many people I know would have definitely said something like: “Oh, to be clear, I totally disagree with the previous two people. There are definitely apocalyptic risks, and we are not prepared for them and cannot deal with them after-the-fact (as you just mentioned being concerned about).”
Extra detail on what happened
Everyone who answered explicitly avoided making timeline predictions and instead talked about where they think the policy focus should be.
The first person roughly said “We have many problems with AI right now, let’s focus on addressing those.”
The middle person said the AI problems are all of the sort “people being sent to jail because of an errant ML system”.
Here’s the middle person in full, clearly responding to the question of whether there’s apocalyptic risks to be worried about:
People ask me what keeps me up at night. AGI does not keep me up at night. And the reason why it doesn’t, is because (as Ms Gibbons mentioned) the problems we are likely to face, with the apocalyptic visions of AGI, are the same problems we are already facing right now, with the systems that are already in play. I worry about people being sent to jail because of an errant ML system. Whether you use some fancy AGI to do the same thing, it’s the same problem… My bet is that the harms we’re going to see, as these more powerful systems come online — even with ChatGPT — are no different from the harms we’re seeing right now. So if we focus our efforts and our energies on governance and regulation and guardrails to address the harms we’re seeing right now, they will be able to adjust as the technology improves. I am not worried that what we put in place today will be out of date or out of sync with the new tech. The new tech is like the old tech, just supercharged.
Matheny didn’t disagree with them and didn’t address the question of whether the risk is apocalyptic; he just said he was uncertain, and then listed the policies he wanted to see: setting standards with third-party audits, and governance of the hardware supply chain to track chips and ensure they don’t go to non-democracies.
To not state that you disagree with the last two positions signals that you agree with them, as the absence of your disagreement is evidence of the absence of disagreement. I don’t think Matheny outright said anything false but I think it is a bit misleading to not say “I totally disagree, I think the new tech will be akin to inventing a whole new superintelligent alien species that may kill us all and take over the universe” if something like that is what you believe.
My read is that he was really trying as hard as he could to not address whether there are apocalyptic risks and instead just focus on encouraging the sorts of policies he thought should be implemented.
Why, though?
Does he know something we don’t? Does he think that if he expresses that those risks are real he’ll lose political capital? People won’t put him or his friends in positions of power, because he’ll be branded as a kook?
Is he just in the habit of side-stepping the weird possibilities?
This looks to me, from the outside, like an unforced error. The questioner was asking about some core beliefs pretty directly. It seems like it would help if, in every such instance, the EA people who think that the world might be destroyed by AGI in the next 20 years said that they think the world might be destroyed by AGI in the next 20 years.