Well, there’s a significant probability COVID isn’t a “natural” pandemic, but the story behind that is complicated and lacks an unambiguous single point of failure, which hinders uptake among would-be activists.
If there’s an AI failure, will things be any different? There will probably be numerous framings of what went wrong and what might be done to fix it, the details needed for real predictive power will probably be complicated, and it’s a good bet that however interested “the powers that be” are in gain-of-function (GOF) research, they’re much, MUCH more interested in AI development. So there may be even more resources available to spin the story and forestall any pressure that might build toward regulation.
Nuclear regulation also might not be a good example of a disaster forcing meaningful regulation, because the real pressure was against military use of nuclear power, and that use seems to have enjoyed general immunity from real regulation. So it’s more like an AI incident resulting in the general public being banned from buying GPUs or something, while myriad AI labs still churn toward AGI.
My main thesis for how a non-existential AI disaster would happen in practice (and I don’t think this will actually happen) is: Google or Facebook or some other large tech company publicly releases an agent that’s intelligent enough to be middling at wargames but not enough to do things like creative ML research, and people put it in some combination of IoT devices, critical infrastructure, and military equipment. Surprise: it has a bad value function and/or edge-case behavior, and a group of agents ends up deliberately and publicly defecting and successfully killing large numbers of people.
In this scenario, it would be extremely obvious that the party responsible for marketing and selling the AI was FaceGoog, and no matter what the Powers That Be wanted, the grieving would direct their anger at those engineers. Politicians wouldn’t individually give much of a shit about the well-being of The Machine; instead they’d be racing to see who could make the most visible condemnations of Big Tech and arguing over which party predicted this would happen all along. Journalists would do what they always do and spin the story according to their individual political ideologies rather than some institutional incentive, which would mean painting their political opponents as Big Tech supporters rather than instrumentally supporting the engineers. Whatever company was responsible would, at a minimum, shutter all of its AI research. Congress would pass some laws written by their lobbyist consultants, of whom, who knows, maybe one or two could even be called “alignment people,” and there would be a new oversight body analogous to what the FDA is for biotech companies.
And I appreciate the viewpoint that this is just one timeline, or that it relies on premises that might be untrue, but in my head at least it seems to fall into place without many critical assumptions.
Generally, I endorse the comparison of AI with nuclear weapons (especially because AI is currently being mounted on literal nuclear weapons).
But in this case, there’s a really big distinction to be made between mass media and specialized institutions. Intelligence and military agencies, specialized Wall Street analyst firms, and bureaucracy leadership probably all know things like exactly how frequently Covid causes brain damage, and they have the best forecasters predicting the next outbreak. For them, it’s less about spinning stories and more about figuring out which professional employees tend to write accurate, predictive reports and forecasts. Spun stories are certainly more influential than they were 10 years ago, and vastly more influential than they appear to the uninitiated, but I don’t know if we’ve gotten to the point where they can fool the professionals at not getting fooled.
Arms control has happened in the past even though it was difficult to verify, and nuclear weapons were centralized by default, so it’s hard to know much about how hard it is to centralize that sort of thing.
With forecasters on both sides given equal amounts of information, these institutions might not even reliably beat the Metaculus community. And anyone who is that great a forecaster can also forecast that jobs like these might not be, among other things, all that fulfilling.
“I don’t know if we’ve gotten to the point where they can fool the professionals at not getting fooled”
Quite a few professionals (though not professionals at not getting fooled) still believe in a roughly 0% probability that a certain bio-related accident happened a few years ago, thanks in large part to a spun story. Maybe the forecasters at the places above know better, but none of the entities who might act on that information are necessarily incentivized to push for regulation as a result. So it’s not clear it would matter if most forecasters know AI is probably responsible for some murky disaster while the public believes humans are responsible.