A policy guaranteed to increase AI timelines
The number of years until the creation of powerful AI is a major input to our thinking about risk from AI and which approaches are most promising for mitigating that risk. While there are downsides to transformative AI arriving many years from now rather than a few years from now, most people seem to agree that it is safer for AI to arrive in 2060 than in 2030. Given this, there is a lot of discussion about what we can do to increase the number of years until we see such powerful systems. While existing proposals have their merits, none of them can ensure that AI will arrive later than 2030, much less 2060.
There is a policy that is guaranteed to increase the number of years between now and the arrival of transformative AI. The General Conference on Weights and Measures defines one second to be 9,192,631,770 cycles of the microwave radiation emitted during a hyperfine transition in the ground state of a cesium-133 atom. Redefining the second to instead be 919,263,177 cycles of this radiation will increase the number of years between now and transformative AI by a factor of ten. The reason this policy works is the same reason that defining a time standard works: the microscopic behavior of atoms and photons is ultimately governed by the same physical laws as everything else, including computers, AI labs, and financial markets, and those laws are unaffected by our time standards. Thus fewer cycles of cesium radiation per second implies proportionately more seconds, and therefore more years, between any two physical events.
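The factor-of-ten scaling can be sketched in a few lines. This is purely illustrative; the "years until transformative AI" input is a hypothetical placeholder, not a forecast.

```python
# Cycle counts per second under the current SI definition and the
# proposed redefinition. A given physical process emits a fixed number
# of cesium cycles, so fewer cycles per second means more seconds
# (and hence more years) elapse during that same process.
OLD_CYCLES_PER_SECOND = 9_192_631_770  # current SI second
NEW_CYCLES_PER_SECOND = 919_263_177    # proposed second

def rescale_years(years_old_standard: float) -> float:
    """Convert a duration in old-standard years to new-standard years."""
    return years_old_standard * OLD_CYCLES_PER_SECOND / NEW_CYCLES_PER_SECOND

# A hypothetical 8-year timeline becomes an 80-year timeline:
print(rescale_years(8))  # 80.0
```

Since 9,192,631,770 is exactly ten times 919,263,177, the conversion factor is exactly 10, which is the whole trick.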
Making such a change might not sound politically tractable, but there is already precedent for making radical changes to the definition of a second. Previously it was defined in terms of Earth’s solar orbit, and before that in terms of Earth’s rotation. These physical processes and their implementations as time standards bear little resemblance to the present-day quantum mechanical standard. In contrast, a change that preserves nearly the entire standard, including all significant figures in the relevant numerical definition, is straightforward.
One possible objection to this policy is that our time standards are not entirely causally disconnected from the rest of the world. For example, redefining the time standard might create a sense of urgency among AI labs and the people investing in them. It's not hard to imagine that the leaders and researchers within companies advancing the state of the art in AI might increase their efforts after noticing it is taking ten times as long to generate the same amount of research. While this is a reasonable concern, it seems unlikely that AI labs can increase their rate of progress by a full order of magnitude; if they could, why would they currently be leaving so much on the table? Furthermore, there are similar effects that might push in the other direction. Once politicians and executives realize they will live to be hundreds of years old, they may take risks to the long-term future more seriously.
Still, it does seem that the policy might have undesirable side effects. Changing all of our textbooks, clocks, software, calendars, and habits is costly. One solution to this challenge is to change the standard either in secret or in a way that allows most people to continue using the old “unofficial” standard. After all, what matters is the actual number of years required to create AI, not the number of years as measured by some deprecated standard.
In conclusion, while there are many policies for increasing the number of years before the arrival of advanced artificial intelligence, until now, none of them has guaranteed a large increase in this number. This policy, if implemented promptly and thoughtfully, is essentially guaranteed to cause a large increase in the number of years before we see systems capable of posing a serious risk to humanity.