Global compliance is the sine qua non of regulatory approaches, and there is no evidence that the political will to make that happen lies within our possible futures unless some catastrophic but survivable casus belli wakes the population up, as with Frank Herbert’s Butlerian Jihad. (Irrelevant aside: Samuel Butler, who wrote in the 19th century of the dangers of machine evolution and supremacy, lived near what later became the filming location for Edoras in the Lord of the Rings films.)
Is it insane to think that a limited nuclear conflict (which seems an increasingly likely possibility at the moment) might actually raise humanity’s chances of long-term survival, if it disrupted global economies severely for a few decades and in particular messed up chip production?
Part of why I am posting this is in case that happens, so people are clear what side I am on.
Popular support for stopping the development of AI is already >70%. Why think that’s not enough, and that populations aren’t already awake?
Well, my model says that what really matters is the opinions of the power-wielding decision makers, and that ‘popular opinion’ doesn’t actually carry much weight in deciding what the US government does. Much less the Chinese government, or the leadership of large corporations.
So my view is that it is the decision-makers currently imagining that the poisoned banana will grant them increased wealth & power who need their minds changed.
My current sense is that efforts to reach the poisoned banana are mostly not driven by politicians. It’s not like Joe Biden or Xi Jinping are pushing for AGI, and even Putin’s comments on AI look like near-term surveillance / military stuff, not automated science and engineering.
Yeah, I agree that that’s what the current situation looks like: more tech CEOs making key decisions than politicians. However, I think the strategic landscape may change quite quickly once real-world effects become more apparent. In either case, I think it’s the set of decision-makers holding the reins (whoever they may be) who need to be updated. I’m pretty sure that the ‘American Public’ or ‘European Public’ could have an influence, but probably not at the level of simply answering ‘AI is scary’ on a poll. Probably there’d need to be, like, widespread riots.
It’s not at all insane IMO. If AGI is “dangerous” × timelines are “short” × anthropic reasoning is valid, then WW3 will probably happen “soon” (in the 2020s).
https://twitter.com/powerfultakes/status/1713451023610634348
I’ll develop this into a post soonish.
I’m hopeful that the politicians of the various nations who might initiate this conflict can see how badly that would turn out for them personally, and thus find sufficient excuses to avoid rushing into that scenario. Not certain by any means, but hopeful. There certainly will need to be some tense negotiations, at the least.