Thanks for clarifying! I agree the Twitter thread doesn’t look convincing.
IIUC your hypothesis, then translating it to the AI Governance issue: it’s important to first get the general public on your side, so that politicians find it in their interest to do something about it.
If so, then perhaps in the meantime we should provide those politicians with a set of experts to whom they could outsource the problem of defining the right policy? I suspect politicians do not write rules themselves in situations like this; rather, they seek out people whom public opinion considers experts. I worry that politicians may want to use this occasion to win something more than public support, say money or favors from companies, and hence pick the wrong experts/laws. So perhaps it is important to work not only on the public’s perception of the threat, but also on who the public considers to be experts?