We probably have a ban on gain-of-function research in the bag, since it seems relatively easy to persuade intellectuals of the merits of the idea.
Is this the case? Like, we had a moratorium on federal funding (not a ban on doing the research, just on whether taxpayers would pay for it), and it was controversial, and then we dropped it after 3 years.
You might have thought that it would be a slam dunk after there was a pandemic for which lab leak was even a plausible origin, but the people who would have been considered most responsible quickly jumped into the public sphere and tried really hard to discredit the idea. I think this is part of a general problem, which is that special interests are very committed to an issue and the public is very uncommitted, and that balance generally favors the special interests. [It’s Peter Daszak’s life on the line for the lab leak hypothesis, and a minor issue to me.] I suspect that if it ever looks like “getting rid of algorithms” is seriously on the table, lots of people will try really hard to prevent that from becoming policy.
And more crucially, it didn’t even stop the federal funding of Baric while it was in place. The equivalent would be outlawing AGI development but doing nothing about people training tool AIs, with developers simply declaring their work to be tool-AI development in response to the regulation.
It’s certainly fairly easy to persuade people that it’s a good idea, but you might be right that asymmetric lobbying can keep good ideas off the table indefinitely. On the other hand, ‘cigarettes cause cancer’ to ‘smoking bans’ took about fifty years despite an obvious asymmetry in favour of tobacco.
As I say, politics is all rather opaque to me, but once an idea is universally agreed amongst intellectuals it does seem to eventually result in political action.