“Humanity doesn’t have control of even today’s AI, but it’s not just AI: climate risk, pandemic risk, geopolitical risk, nuclear risk—they’re all trending to x-risk, and we don’t have control of any of them. They’re all reflections of the same underlying reality: humanity is an infinitely strong infant, with exponentially growing power to imperil itself, but not yet the ability to think or act coherently in response. This is the true threat—we’re in existential danger because our power at scale is growing so much faster than our agency at scale.
This has always been our situation. When we look into the future of AI and see catastrophe, what we’re looking at is not loss of control, but the point at which the rising tide of our power makes our lack of control fatal.”
Wonderfully said. My path here was actually through other x-risks, as well as a search for alternatives to our current paradigm, in which each of us, driven by different incentives, always desires to gain as much as we can and suffers when we fall short—so someone must always suffer for someone else's happiness. Under this paradigm, even an AI that worked perfectly for one person's happiness would hurt others, and one that worked for everyone's happiness together would still at times be resented by the people it held back from all the happiness they could otherwise achieve. I am working to be one brick on the long road toward an intelligence that can act efficiently for its future happiness and be composed of the unity of all feeling life in the world. We're a long way from that, but I think progress in brain-computer interfaces will be of use, and if anyone else is interested in this approach of building up and connecting the human mind, please reach out!