For 1 (probability of AGI) in particular: in addition to probably thinking, inside view, that AGI is harder than Eliezer/MIRI think it is, I also think civilization's dysfunctions are more likely to disrupt things and make it increasingly difficult to do anything at all, or anything new/difficult, and I put more weight on collapse or other disasters. I know Nate Soares explicitly rejects this as mattering much, but inside view it matters quite a bit to me.
I’d like to believe this, but the coronavirus disaster gives me pause. Seems like the ONE relevant bit of powerful science/technology that wasn’t heavily restricted or outright banned was gain-of-function research, which may or may not have been responsible for the whole mess in the first place (and certainly raises the danger of it happening again).
And I notice that the same forces/people/institutions who unpersuasively defend the FDA's anti-vaccine policies also unpersuasively defend the legality of GoF… I honestly don't have any model of what's going on there, and what these same forces/people/institutions said about the White House's push for boosters convinces me it is more complicated than instinctively aligning with authority, power, or partisan interests. Does anyone have a model for this?
But in lieu of real understanding: I don't think we can count on our civilizational dysfunction to coincidentally help us here. If our civilization can't manage to stop GoF, while it simultaneously outlaws vaccines and human challenge trials (HCTs), I don't think we should expect it to slow down AI by very much.
I don't literally expect the scenario where, say… the outrage machine calls for banning AI Alignment research and successfully restricts it, while our civilization feverishly pours all of its remaining ability to Do into scaling AI. But I don't expect the reality to be much better than that, from a civilizational competence point of view. (At least not on the current path, and I don't currently see any members of this community making massive heroic efforts to change that, at least none that look like they will actually succeed.)