I feel (mostly from observing an omission (I admit I have not yet RTFB)) that the international situation is not correctly countenanced here. This bit is starting to grapple with it:
Plan for preventing use, access and reverse engineering in places that lack adequate AI safety legislation.
Other than that, it seems like this bill basically thinks that America is the only place on Earth that exists and has real computers and can make new things????
And even, implicitly in that clause, the worry is “Oh no! What if those idiots out there in the wild steal our high culture and advanced cleverness!”
However, I expect other countries with less legislation to swiftly sweep into being much more “advanced” (closer to being eaten by artificial general super-intelligence) by default.
It isn’t going to be super hard to make this stuff; it’s just that everyone smart refuses to work on it because they don’t want to die. Unfortunately, even midwits can do this. Hence (if there is real danger) we probably need legislative restrictions.
That is: the whole point of the legislation is basically to cause “fast technological advancement to reliably and generally halt” (we want the FAISA to kill nearly all dramatic and effective AI innovation, similar to how the FDA kills nearly all dramatic and effective drug innovation, and similar to how the Nuclear Regulatory Commission killed nearly all nuclear power innovation and nuclear power plant construction for decades).
If other countries are not similarly hampered by having similar FAISAs of their own, then they could build an Eldritch Horror and it could kill everyone.
Russia didn’t have an FDA, and invented their own drugs.
France didn’t have the NRC, and built an impressively good system of nuclear power generation.
I feel that we should be clear that the core goal here is to destroy innovative capacity, in AI, in general, globally, because we fear that innovation has a real chance, by default, by accident, of leading to “automatic human extinction”.
The smart and non-evil half of the NIH keeps trying to ban domestic Gain-of-Function research… so people can just do that in Norway and Wuhan instead. That research can still kill lots of people, because the issue wasn’t taken seriously at the State Department, and we have no global restriction on Gain-of-Function work. The Biological Weapons Convention exists, but the BWC is wildly inadequate on its face.
The real and urgent threat model here is (1) “artificial general superintelligence” arises and (2) gets global survive and spread powers and then (3) thwarts all human aspirations like we would thwart the aspirations of ants in our kitchen.
You NEED global coordination to stop this EVERYWHERE or you’re just re-arranging who, in the afterlife, everyone will be pointing at to blame them for the end of humanity.
The goal isn’t to be blameless and dead. The goal is to LIVE. The goal is to reliably and “on purpose” survive and thrive, in humanistically delightful ways, in the coming decades, centuries, and millennia.
If extinction from non-benevolent artificial superintelligence is a real fear, then it needs international coordination. If this is not a real fear, then we probably don’t need the FAISA in the US.
So where is the mention of a State Department loop? Where is the plan for diplomacy? Where are China or Russia or the EU or Brazil or Taiwan or the UAE or anyone but America mentioned?
Two obvious points:
1. It is deontologically more ethical to not yourself kill everyone in the world.
2. America has an incredible ability to set fashions, and if it took on these policies then I think a great number of others would follow suit.
Rather than have America hope to “set a fashion” (that would obviously (to my mind) NOT be “followed based on the logic of fashion”) in countries that hate us, like North Korea and so on...
I would prefer to reliably and adequately cover EVERY base that needs to be covered, and I think this would work best if people in literally every American consulate in every country (and also at least one person for every country with no diplomatic delegation at all) were tracking the local concerns, and trying to get a global FAISA deal done.
If I might rewrite this a bit:
The goal isn’t FOR AMERICA to be blameless and EVERYONE to be dead. The goal is for ALL HUMANS ON EARTH to LIVE. The goal is to reliably and “on purpose” survive and thrive, on Earth, in general, even for North Koreans, in humanistically delightful ways, in the coming decades, centuries, and millennia.
The internet is everywhere. All software is intrinsically similar to a virus. “Survive and spread” capabilities in software are the default, even for software that lacks general intelligence.
If we actually believe that AGI convergently heads towards “not aligned with Benevolence, and not aligned with Natural Law, and not caring about humans, nor even caring about AI with divergent artificial provenances” but rather we expect each AGI to head toward “control of all the atoms and joules by any means necessary”… then we had better stop each and every such AGI very soon, everywhere, thoroughly.
@Zach Stein-Perlman I’m not really sure why you gave a thumbs-down. Probably you’re not trying to communicate that you think there shouldn’t be deontological injunctions against genocide. I think someone renouncing any deontological injunctions against such devastating and irreversible actions would be both pretty scary and reprehensible. But I failed to come up with a different hypothesis for what you are communicating with a thumbs-down on that statement (to be clear I wouldn’t be surprised if you provided one).
Suppose you can take an action that decreases net P(everyone dying) but increases P(you yourself kill everyone), and leaves all else equal. I claim you should take it; everyone is better off if you take it.
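To make that tradeoff concrete, here is a minimal sketch with invented numbers (the 0.10, 0.05, and 0.02 figures are purely illustrative assumptions, not anything claimed in this thread):

```python
# Illustrative numbers only: compare the world without and with the action.
p_doom_without = 0.10        # P(everyone dies) if you do nothing; none of it is "your fault"
p_doom_with = 0.05           # P(everyone dies) if you act; total risk is lower...
p_doom_caused_by_you = 0.02  # ...but this much of it now traces back to you

# An outcomes-focused view compares total risk, ignoring whose hands are dirty.
assert p_doom_caused_by_you <= p_doom_with
if p_doom_with < p_doom_without:
    print("Everyone's survival odds are better if you take the action.")
```

On these made-up numbers, refusing to act keeps your hands clean but leaves everyone facing twice the risk, which is the intuition the claim above is pointing at.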
I deny “deontological injunctions.” I want you and everyone to take the actions that lead to the best outcomes, not the ones that keep your own hands clean. I’m puzzled by your expectation that I’d endorse “deontological injunctions.”
This situation seems identical to the trolley problem in the relevant ways. I think you should avoid letting people die, not just avoid killing people.
[Note: I roughly endorse heuristics like if you’re contemplating crazy-sounding actions for strange-sounding reasons, you should suspect that you’re confused about your situation or the effects of your actions, and you should be more cautious than your naive calculations suggest. But that’s very different from deontology.]
I think I have a different overall take than Ben here, but the frame I think makes sense is: “Deontological injunctions are guardrails. There are hypothetical situations (and some real situations) where it’s correct to override them, but the guardrail should have some weight, and for more important guardrails you need clearer reasoning for why overriding it actually helps.”
I don’t know what I think about this in the case of a country passing laws. Countries aren’t exactly agents. Passing novel laws is different than following existing laws. But, I observe:
it’s really hard to be confident about long-term consequences of things. Consequentialism just isn’t actually compute-efficient enough to be what you use most of the time for making decisions. (This includes, but isn’t limited to, “you’re contemplating crazy-sounding actions for strange-sounding reasons”, although I think it has a similar generator.)
it matters not just what you-in-particular-in-a-vacuum do, in one particular timeslice. It matters how complicated the world is to reason about. If everyone is doing pure consequentialism all the time, you have to model the way each person is going to interpret consequences with their own special-snowflake worldview. Having to model “well, Alice and Bob and Charlie and 1000s of other people might decide to steal from me, or from my friends, if the benefits were high enough and they thought they could get away with it” adds a tremendous amount of overhead.
You should be looking for moral reasoning that makes you simple to reason about, and that performs well in most cases. That’s a lot of what deontology is for.