But the goal of this phase is to establish “hey, we have dangerous AI, and we don’t yet have the ability to reasonably demonstrate we can render it non-dangerous”, and to stop development of AI until companies figure out some plans that at _least_ make enough sense to government officials.
I think I very strongly expect corruption-by-default in the long run?
Also, since the government of California is a “long run bureaucracy” already I naively expect it to appoint “corrupt by default” people unless this is explicitly prevented in the text of the law somehow.
Like maybe there could be a proportionally representative election (or sortition?) over a mixture of the (1) people who care (artists and luddites and so on) and (2) people who know (ML engineers and CS PhDs and so on) and (3) people who are wise about conflicts (judges and DAs and SEC people and divorce lawyers and so on).
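For concreteness, the “proportionally representative election (or sortition)” over those three pools could look something like this toy sketch. Everything here (the pool names, the largest-remainder allocation, all identifiers) is my own illustrative assumption, not anything from the bill:

```python
import random

def sortition(pools: dict[str, list[str]], seats: int, seed: int = 0) -> list[str]:
    """Draw a board by lot: allocate seats across pools in proportion to
    pool size (largest-remainder method), then sample uniformly within
    each pool without replacement."""
    rng = random.Random(seed)  # fixed seed so the draw is auditable
    total = sum(len(p) for p in pools.values())
    # Proportional quotas, floored, then leftovers to the largest remainders.
    quotas = {k: seats * len(p) / total for k, p in pools.items()}
    alloc = {k: int(q) for k, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    for k in sorted(quotas, key=lambda k: quotas[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return [name for k, p in pools.items() for name in rng.sample(p, alloc[k])]
```

With three equally sized pools (“care”, “know”, “wise”) and three seats, each pool gets exactly one seat; the random draw only decides *which* member within each pool serves.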
I haven’t read the bill in its current form. Do you know whether it specifies a reliable method to ensure that “the actual government officials who make the judgement call” will be selected in a way that makes it highly likely they will be honest and prudent about what is actually dangerous when the chips are down and the cards are turned over?
Also, is there an expiration date?
Like… if California’s bureaucracy is still (1) needed and (2) extant by the time 2048 rolls around (a mere 24 years from now, which is inside the life expectancy of most people and inside the career-planning horizon of everyone smart who is in college right now), then I would be very, very, very surprised.
By 2048 I expect (1) California (and maybe humans) to not exist, or (2) a pause to have happened, in which case a subnational territory isn’t the right level for a Pause Maintenance Institution to draw authority from, or (3) doomer premises to have been deeply falsified by future technical work on “inevitably convergent computational/evolutionary morality” (or some other galaxy-brained weirdness).
Either we are dead by then, or wrong about whether superintelligence was even possible, or we managed to globally ban AGI in general, or something.
So it seems like it would be very reasonable to simply say that in 2048 the entire thing has to be disbanded, and a brand new thing started up with all new people, as some OTHER way to break the “naturally but sadly arising” dynamics of careerist political corruption.
I’m not personally attached to 2048 specifically, but I think some “expiration date” that is farther in the future than 6 years, and also within the lifetime of most of the people participating in the process, would be good.
Will respond in more detail later hopefully, but meanwhile, re:
I haven’t read the bill in its current form. Do you know whether it specifies a reliable method to ensure that “the actual government officials who make the judgement call” will be selected in a way that makes it highly likely they will be honest and prudent about what is actually dangerous when the chips are down and the cards are turned over?
I copied over the text of how the Frontier Model Board gets appointed. (Although note that after amendments, the Frontier Model Board no longer has any explicit power; it can only advise the existing GovOps agency and the Attorney General.) Not commenting yet on what this means as an answer to your question.
(c) (1) Commencing January 1, 2026, the Board of Frontier Models shall be composed of nine members, as follows:
(A) A member of the open-source community appointed by the Governor and subject to Senate confirmation.
(B) A member of the artificial intelligence industry appointed by the Governor and subject to Senate confirmation.
(C) An expert in chemical, biological, radiological, or nuclear weapons appointed by the Governor and subject to Senate confirmation.
(D) An expert in artificial intelligence safety appointed by the Governor and subject to Senate confirmation.
(E) An expert in cybersecurity of critical infrastructure appointed by the Governor and subject to Senate confirmation.
(F) Two members who are academics with expertise in artificial intelligence appointed by the Speaker of the Assembly.
(G) Two members appointed by the Senate Rules Committee.
(2) A member of the Board of Frontier Models shall meet all of the following criteria:
(A) A member shall be free of direct and indirect external influence and shall not seek or take instructions from another.
(B) A member shall not take an action or engage in an occupation, whether gainful or not, that is incompatible with the member’s duties.
(C) A member shall not, either at the time of the member’s appointment or during the member’s term, have a financial interest in an entity that is subject to regulation by the board.
(3) A member of the board shall serve at the pleasure of the member’s appointing authority but shall serve for no longer than eight consecutive years.
So Newsom would control five of the nine votes (the appointments in (A) through (E)), until this election occurs?
I wonder what his policies are? :thinking:
(Among the Presidential candidates, I liked RFK’s position best. When asked, off the top of his head, he jumps right into extinction risks, totalitarian control of society, and the need for international treaties for AI and bioweapons. I really love how he lumps “bioweapons and AI” as a natural category. It is a natural category.
But RFK dropped out, and even if he hadn’t dropped out it was pretty clear that he had no chance of winning because most US voters seem to think being a hilariously awesome weirdo is bad, and it is somehow so bad that “everyone dying because AI killed us” is like… somehow more important than that badness? (Obviously I’m being facetious. US voters don’t seem to think. They scrupulously avoid seeming that way because only weirdos “seem to think”.))
I’m guessing the expiration date on the law isn’t in there at all, because cynicism predicts that nothing like it would be in there, because that’s not how large corrupt bureaucracies work.
(/me wonders aloud if she should stop calling large old bureaucracies corrupt-by-default in order to start sucking up to Newsom as part of a larger scheme to get onto that board somehow… but prolly not, right? I think my comparative advantage is probably “being performatively autistic in public”, which is usually incompatible with acquiring or wielding democratic political power.)