I hope that the voluminous discussion of exactly how bad each of the big AI labs is doesn’t distract readers from what I consider the main chances: getting all the AI labs banned (eventually) and convincing talented young people not to put in the years of effort needed to prepare themselves to do technical AI work.
I’m curious if your argument, distilled, is: fewer people skilled in technical AI work is better? Such a claim must be examined closely! Think of it from a systems dynamics point of view. We must look at more than just one relationship. (I personally try to press people to share some kind of model that isn’t presented only in words.)
Yes, I am pretty sure that the fewer additional people skilled in technical AI work, the better. In the very unlikely event that, before the end, someone or some group actually comes up with a reliable plan for how to align an ASI, we certainly want a sizable number of people able to understand the plan relatively quickly (i.e., without first needing to prepare themselves through study for a year), but IMHO we already have that.
“The AI project” (the community of people trying to make AIs that are as capable as possible) probably needs many thousands of additional people with technical training to achieve its goal. (And if the AI project doesn’t need those additional people, that is bad news, because it probably means we are all going to die sooner rather than later.) Only a few dozen or a few hundred researchers (and engineers) will probably make substantial contributions toward the goal, but neither the apprentice researchers themselves, nor their instructors, nor their employers can tell in advance which researchers will make a substantial contribution, so the only way for the project to get an adequate supply of researchers is to train and employ many thousands. The project would prefer to employ even more than that.
I am pretty sure it is more important to restrict the supply of researchers available to the AI project than it is to have more researchers who describe themselves as alignment researchers. It’s not flatly impossible that the AI-alignment project will bear fruit before the end, but it is very unlikely. In contrast, if not stopped somehow (e.g., by the arrival of helpful space aliens or some other miracle), the AI project will probably succeed at its goal. Most people pursuing careers in alignment research are probably doing more harm than good, because the AI project tends to be able to use any results they come up with. MIRI is an exception to the general rule, but MIRI has chosen to stop its alignment research program on the grounds that it is hopeless.
Restricting the supply of researchers for the AI project by warning talented young people not to undergo the kinds of training needed by the AI project increases the length of time left before the AI project kills us all, which increases the chances of a miracle such as the arrival of the helpful space aliens. Also, causing our species to endure 10 years longer than it would otherwise endure is an intrinsic good even if it does not instrumentally lead to our long-term survival.
Here is an example of a systems dynamics diagram showing some of the key feedback loops I see. We could discuss various narratives around it and what to change (add, subtract, modify).
┌───── to the degree it is perceived as unsafe ◀────────────────────────────┐
│                     ┌── economic factors ◀───┐                            │
│                   + ▼                        │                            │
│  - ┌──────────┐ +  ┌───────────┐  +  ┌────────┐  +  ┌──────────┐ +  ┌──────────┐
├───▶│  people  │───▶│ effort to │────▶│AI power│────▶│potential │───▶│AI becomes│
│    │working in│    │make AI as │     └────────┘     │for unsafe│    │   too    │
│    │general AI│    │powerful as│         │          │    AI    │    │ powerful │
│    └──────────┘    │ possible  │         │          └──────────┘    └──────────┘
│         │          └───────────┘         │  e.g. use AI to reason
│         │ net movement                   │  about AI safety
│       + ▼                              + ▼
│    ┌──────────┐    ┌──────────┐      ┌─────────────┐       ┌─────────┐
│ +  │  people  │ +  │effort for│  +   │understanding│  +    │alignment│
└───▶│working in│───▶│ safe AI  │─────▶│of AI safety │──────▶│ solved  │
     │AI safety │    └──────────┘      └─────────────┘       └─────────┘
     └──────────┘                             │
         + ▲                                  │
           └──── success begets interest ◀────┘
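To make the loop structure easier to check and discuss, here is a minimal Python sketch of the same diagram. The encoding (the LINKS table and the loop_polarity helper) is mine and purely illustrative; only the node names and edge signs are transcribed from the diagram above.

# Minimal sketch (my own encoding, not part of the original diagram): store the
# signed causal links and classify any feedback loop by the product of its signs.

LINKS = {
    ("people working in general AI", "effort to make AI as powerful as possible"): +1,
    ("effort to make AI as powerful as possible", "AI power"): +1,
    ("AI power", "potential for unsafe AI"): +1,
    ("potential for unsafe AI", "AI becomes too powerful"): +1,
    ("AI power", "effort to make AI as powerful as possible"): +1,        # economic factors
    ("AI power", "understanding of AI safety"): +1,                       # use AI to reason about AI safety
    ("AI becomes too powerful", "people working in general AI"): -1,      # perceived as unsafe
    ("AI becomes too powerful", "people working in AI safety"): +1,       # perceived as unsafe
    ("people working in general AI", "people working in AI safety"): +1,  # net movement
    ("people working in AI safety", "effort for safe AI"): +1,
    ("effort for safe AI", "understanding of AI safety"): +1,
    ("understanding of AI safety", "alignment solved"): +1,
    ("understanding of AI safety", "people working in AI safety"): +1,    # success begets interest
}

def loop_polarity(cycle):
    """Multiply the signs around a closed loop of node names:
    +1 means reinforcing (positive feedback), -1 means balancing (negative feedback)."""
    polarity = 1
    for src, dst in zip(cycle, cycle[1:] + cycle[:1]):
        polarity *= LINKS[(src, dst)]
    return "reinforcing" if polarity > 0 else "balancing"

print(loop_polarity(["AI power", "effort to make AI as powerful as possible"]))
# -> reinforcing: the economic-factors loop keeps amplifying capability work
print(loop_polarity(["people working in general AI",
                     "effort to make AI as powerful as possible",
                     "AI power",
                     "potential for unsafe AI",
                     "AI becomes too powerful"]))
# -> balancing: perceived danger eventually pushes people out of capability work

Once the links are written down in a structure like this, enumerating every loop and checking whether it is reinforcing or balancing becomes mechanical, which is exactly the kind of completeness check that is hard to do by eye.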
I find this style of thinking particularly constructive:
For any two nodes, you can see a visual relationship (or lack thereof) and ask “what influence do these have on each other, and why?”
The act of summarization cuts out chaff.
It is harder to fool yourself about the completeness of your analysis.
It is easier to get to the core areas of confusion or disagreement with others.
Personally, I find verbal reasoning workable for “local” (pairwise) reasoning but quite constraining for systemic thinking.
If nothing else, I hope this example shows how easily key feedback loops get overlooked. How many of us claim (a) some technical expertise in positive and negative feedback, or (b) an interest in Bayes nets? So why don’t we take the time to write out our diagrams? How can we do better?
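One way to do better, once the qualitative links are agreed on, is to push the diagram a step further into a toy simulation. The sketch below is deliberately crude and entirely hypothetical: two stocks, one abstract capability index, and made-up coefficients chosen only to show the method, not to make any quantitative claim.

# Toy stock-and-flow sketch with made-up coefficients; it illustrates turning the
# causal-loop diagram into dynamics, not a calibrated model of anything.

def simulate(years=20):
    capability = 10_000.0   # stock: people working in general AI (made-up starting value)
    safety = 300.0          # stock: people working in AI safety (made-up starting value)
    ai_power = 1.0          # abstract capability index (made-up starting value)

    for year in range(1, years + 1):
        perceived_danger = 0.005 * ai_power                 # grows with AI power
        d_capability = (0.05 * ai_power                     # economic factors pull people in
                        - perceived_danger * capability     # perceived unsafety pushes people out
                        - 0.01 * capability)                # net movement into safety work
        d_safety = (0.01 * capability                       # net movement from capability work
                    + 0.1 * perceived_danger * capability   # perceived danger recruits safety people
                    + 0.005 * safety)                       # success begets interest (stand-in)
        d_power = 0.0005 * capability                       # more capability researchers, more AI power

        capability += d_capability
        safety += d_safety
        ai_power += d_power
        print(f"year {year:2d}: capability={capability:8.0f}  "
              f"safety={safety:8.0f}  AI power={ai_power:7.2f}")

simulate()

Even a crude run like this forces you to state which loops you think dominate and when; that is usually where the real disagreement is hiding.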
P.S. There are major oversights in the diagram above, such as economic factors. This is not a limitation of the technique itself—it is a limitation of the space and effort I’ve put into it. I have many other such diagrams in the works.