If some of the more pessimistic projections of TAI timelines are realized, my efforts in this field will have no effect. It is going to take at least 30 years before dramatically more capable humans can meaningfully contribute to work in this field. Using Ajeya Cotra's timeline estimate, which puts a 50% chance on TAI by 2052, I estimate that there is at most a 50% probability that these efforts will have any impact, and a ~25% chance that they will have a large impact.
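Roughly, the arithmetic here is (a loose sketch, taking the 2052 median and the ~30-year lead time at face value, and treating the ~25% as something like even odds of a large impact conditional on having any impact at all):

$$P(\text{efforts matter}) \;\le\; P(\text{TAI arrives after } {\sim}2052) \;\approx\; 0.5, \qquad P(\text{large impact}) \;\approx\; 0.5 \times 0.5 \;=\; 0.25.$$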
Those odds are good enough for me.
Thanks for this post! How low would the odds have to be before you would switch to doing something else? Would you continue with your current plan if the odds were 20-10 instead of 50-25?
I think if the odds were below 10% I would probably switch. Other than faster-than-expected progress in AI, the biggest thing I’m worried about is iterated embryo selection taking too long. That seems like the only technology capable of creating truly superlative humans capable of making a significant impact before TAI is created.
Do you think such humans would have a high probability of working on TAI alignment, compared to working on actually making TAI?
This is a really good question. I'm not sure I have a satisfying answer other than to say that awareness of the dangers of both nuclear weapons and computers has been disproportionately high among extremely smart people. John von Neumann literally woke up in the middle of the night in 1945 and described to his wife the coming impact of both the Manhattan Project and the more general project of computation.
One night in early 1945, just back from Los Alamos, von Neumann woke in a state of alarm in the middle of the night and told his wife Klari:
“… we are creating … a monster whose influence is going to change history … this is only the beginning! The energy source which is now being made available will make scientists the most hated and most wanted citizens in any country.
The world could be conquered, but this nation of puritans will not grab its chance; we will be able to go into space way beyond the moon if only people could keep pace with what they create …”
He then predicted the future indispensable role of automation, becoming so agitated that he had to be put to sleep with a strong drink and sleeping pills.
In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
Or Alan Turing around the same time:
“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.”
Another one from him:
“Let us return for a moment to Lady Lovelace’s objection, which stated that the machine can only do what we tell it to do. One could say that a man can “inject” an idea into the machine, and that it will respond to a certain extent and then drop into quiescence, like a piano string struck by a hammer. Another simile would be an atomic pile of less than critical size: an injected idea is to correspond to a neutron entering the pile from without. Each such neutron will cause a certain disturbance which eventually dies away. If, however, the size of the pile is sufficiently increased, the disturbance caused by such an incoming neutron will very likely go on and on increasing until the whole pile is destroyed. Is there a corresponding phenomenon for minds, and is there one for machines? There does seem to be one for the human mind. The majority of them seem to be “sub critical,” i.e. to correspond in this analogy to piles of sub-critical size. An idea presented to such a mind will on average give rise to less than one idea in reply. A smallish proportion are supercritical. An idea presented to such a mind may give rise to a whole “theory” consisting of secondary, tertiary and more remote ideas. Animals’ minds seem to be very definitely sub-critical. Adhering to this analogy we ask, “Can a machine be made to be super-critical?”
Granted, these are just anecdotes. And let it be noted that von Neumann and Turing both went on to make significant progress in their respective fields despite these concerns. My current theory is that yes, such dramatically more capable humans would be more likely both to recognize the danger of AI and to do something about it. But that could be wrong. I will have to think more about this.
I’m not sure about the exact threshold. If the odds were below 10% I think that would be enough for me to switch to AI.
There is one other way in which I think a career in genetics could translate into a career in existential risk mitigation: through reducing the likelihood of engineered pandemics. One of the key technologies that holds incredible potential for good and for harm is genome synthesis. Given the recent rates of cost decline, I worry that someone might be able to re-create super smallpox or something before we even get to TAI. A career in genetics would put me closer to that technology, so maybe I could help design systems to prevent that particular type of disaster.