You want to shut down AI to give more time… for what? Let’s call the process you want to give more time to X. You want X to go faster than AI, so the relevant quantity is the ratio between the speed of X and the speed of AI. If X were clarified, it would be easier to judge whether it is more efficient to increase this ratio by speeding up X or by slowing down AI. I don’t see in this post any idea of what X is, or any feasibility estimate of speeding up X versus slowing down AI.
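To make the ratio framing concrete, here is a minimal sketch with made-up speeds; none of these numbers come from the post, they are purely illustrative assumptions:

```python
# Toy model of the ratio argument: what matters is speed_x / speed_ai.
# All numbers below are invented for illustration.

def ratio(speed_x: float, speed_ai: float) -> float:
    """Progress of X relative to AI; the goal is to make this grow."""
    return speed_x / speed_ai

baseline  = ratio(speed_x=1.0, speed_ai=4.0)   # 0.25
faster_x  = ratio(speed_x=2.0, speed_ai=4.0)   # 0.50 -- doubling X
slower_ai = ratio(speed_x=1.0, speed_ai=2.0)   # 0.50 -- halving AI

print(baseline, faster_x, slower_ai)
```

Doubling the speed of X and halving the speed of AI improve the ratio by the same factor, so the question reduces to which intervention is cheaper and more feasible.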
Quoting from Gretta:

“One thing we can hope for, if we get a little more time rather than a lot more time, is that we might get various forms of human cognitive enhancement working, and these smarter humans can make more rapid progress on AI alignment.”
Glad there is a specific idea there. What are the main approaches for this? There’s Neuralink and there’s gene editing, among other things. It seems MIRI may have access to technical talent that could speed up some of these projects.
Related: https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing
If we manage to avoid extinction for a few centuries, cognitive capacities among the most capable people are likely to increase substantially merely through natural selection. Because our storehouse of potent knowledge is now so large, and because of other factors (e.g., increased specialization in the labor market), it is easier than ever for people with high cognitive capacity to earn above-average incomes and to avoid, or obtain cures for, illnesses in themselves and their children. (The level of health care a person can obtain by consulting doctors and being willing to follow their recommendations will always lag behind the level that can be obtained by doing that while also doing one’s best to create and refine a mental model of the illness.)
Yes, there is a process that has been causing the more highly educated and the more highly paid to have fewer children than average, but natural selection will probably cancel out the effect of that process over the next few centuries: I can’t think of any human traits under more selection pressure than the traits that make it more likely that an individual will choose to have children even when effective contraception is cheap and available. Also, declining birth rates are causing big problems for the economies and military readiness of many countries, and governments might respond to those problems in the future by banning contraception.
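As a rough sanity check on the “few centuries of selection” claim above, here is a toy calculation using the breeder’s equation (R = h²·S); the heritability, selection differential, and generation length below are assumptions chosen for illustration, not measured figures:

```python
# Toy breeder's equation: response per generation R = h^2 * S,
# where h^2 is narrow-sense heritability and S is the selection
# differential on the trait (in trait-score points). All inputs
# are illustrative assumptions, not measured values.

H2 = 0.4           # assumed heritability of the selected trait
S = 1.0            # assumed selection differential per generation
GEN_YEARS = 30     # assumed years per generation
YEARS = 300        # "a few centuries"

generations = YEARS // GEN_YEARS            # 10 generations
total_shift = H2 * S * generations          # 4.0 points

print(f"{generations} generations -> ~{total_shift:.1f}-point shift")
```

Under these assumed inputs the shift is modest; the result scales linearly with h² and S, so how “substantial” the change would be hinges on how strong the actual selection differential turns out to be.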
Minor flag, but I’ve thought about some similar ideas, and here’s one summary:
https://forum.effectivealtruism.org/posts/YpaQcARgLHFNBgyGa/prioritization-research-for-advancing-wisdom-and
Personally, I’d guess that we could see a lot of improvement through clever uses of safe AIs. Even if we stopped improving LLMs today, I think we have a long way to go before we make good use of current systems.
Just because there are potentially risky AIs down the road doesn’t mean we should ignore the productive use of safe AIs.