Your paradigm, if I understand it correctly, is that the self-sustaining knowledge explosion of modern times is constantly hatching new technological dangers, and that there needs to be some new kind of response—from the whole of civilization? just from the intelligentsia? It’s unclear to me if you think you already have a solution.
You’re also saying that focus on AI safety is a mistake, compared with focus on this larger recurring process of dangerous new technologies emerging from the process of discovery.
There are in fact good arguments that AI is now pivotal to the whole process and also to its resolution. However, I would first like to hear what your own recommendations are, before presenting an AI-centric perspective.
Thanks much for your engagement, Mitchell; appreciated.
Your paradigm, if I understand it correctly, is that the self-sustaining knowledge explosion of modern times is constantly hatching new technological dangers, and that there needs to be some new kind of response
Yes, to quibble just a bit: not just self-sustaining, but also accelerating. The way I often put it is that we need to adapt to the new environment created by the success of the knowledge explosion. I just put up an article on the forum which explains further:

https://www.lesswrong.com/posts/nE4fu7XHc93P9Bj75/our-relationship-with-knowledge
from the whole of civilization? just from the intelligentsia
As I imagine it, the needed adaptation would start with intellectual elites, but eventually some critical mass of the broader society would have to agree, to one degree or another. I’ve been writing about this for years now, and can’t actually provide any evidence that intellectual elites can lead on this, but who else?
It’s unclear to me if you think you already have a solution.
I don’t have a ten-point plan or anything; I’m just trying to encourage this conversation wherever I go. Success for me would be hundreds of intelligent, well-educated people exploring the topic in earnest together. That is happening to some degree already, but not with the laser focus on the knowledge explosion that I would prefer.
You’re also saying that focus on AI safety is a mistake...
I see AI discussions as a distraction: an addressing of symptoms rather than of the source of X-risks. If 75% of the time we were discussing the source of X-risk, I wouldn’t object to the other 25% addressing particular symptoms.
I’m attempting to apply common sense. If one has puddles all around the house every time it rains, the focus should be on fixing the hole in the roof. Otherwise one spends the rest of one’s life mopping up the puddles.
There are in fact good arguments that AI is now pivotal to the whole process and also to its resolution.
I don’t doubt AI can make a contribution in some areas, no argument there. But I don’t see any technology as being pivotal. I see the human condition as being pivotal.
I’m attempting to think holistically, and to consider man and machine as a single operation whose success depends upon the weakest link, which I propose is us. Knowledge development races ahead at an ever-accelerating rate, while human maturity inches along incrementally, if that. Thus the gap between the two is ever widening.
Please proceed to engage from whatever perspective you find useful. What I hope to be part of is a long, deliberate process of challenge and counter-challenge which helps us inch a little closer to some useful truth.

Thanks again!
We believe AI is pivotal because we think it’s going to surpass human intelligence soon. So it’s not just another technology; it’s our successor.
The original plan of MIRI, the AI research institute somewhat associated with this website, was to identify a value system and a software architecture for AI that would remain human-friendly even after it bootstrapped itself to a level completely beyond human control or understanding, becoming the metaphorical “operating system” in charge of all life on Earth.
More recently, given the rapid advances in the raw power of AI, they have decided that there just isn’t time to solve these design problems before some AI lab somewhere unwittingly hatches a superintelligent AI system that steamrolls the human race, not out of malice, but simply because it has goals that aren’t sufficiently fine-tuned to respect human life, liberty, or happiness.
Instead, their current aim is to buy time for humanity by using early superintelligent AI to neutralize all other dangerous AI projects and establish a temporary regime in which civilization can deliberate on what to do with the incredible promise and peril of AI and related technologies.
There is therefore some similarity with your own idea of slowing things down, but in this scenario it is to be done by force, using the dangerous technology of superintelligent AI when it first appears. Continuing the operating-system metaphor, this amounts to putting AI-augmented civilization into a “safe mode” before it can do anything too destructive.
This suggests a model of the future in which there is a kind of temporary world government, equipped with a superintelligent AI that monitors everything everywhere and steps in to sabotage any unapproved technology that threatens to create unfriendly superintelligence. Ideally, this period lasts as long as it takes for humanity’s wise ones to figure out how to make fully autonomous superintelligence something that we can safely coexist with. At that point the temporary world government can be permanently replaced by that self-governing planetary operating system.
You may be wondering: why rely on AI to restrain AI? Why not just have, say, the UN Security Council declare that AI research worldwide will be frozen indefinitely, and use the existing tools of human governance to enforce that? The problem is that technological culture is decentralized and self-enhancing. In the short term, we might throttle the development of deep learning AI by restricting access to TPU chips worldwide. But you can also run the algorithms on sufficiently large networks of ordinary computers. And ultimately, you even have to worry about things like superintelligence achieved via neuron-hacking, polymeric nanocomputers, and so forth.
The premise is that the world is too out of control to stop everyone, everywhere, from ever crossing the dangerous threshold. So instead, one must work towards an outcome whereby the first ones across the threshold use that power to slow things down for everyone else, while responsibly trying to figure out how to safely integrate that power into our world.
OK, that’s a glimpse of how some people are thinking. AI is seen as the crux of everything because it is at the hub of everything: it can control other technologies, it can discover new technologies, it can even replace us as the chief decision-making entity in the world. And it’s really “AGI” (artificial general intelligence), especially AGI that is more intelligent than humans, which is the focus of all this concern. “Narrow AI” that just drives cars or recognizes faces has its own safety issues, but isn’t as all-encompassing in its implications.