An International Manhattan Project for Artificial Intelligence
The Manhattan Project—A Brief History
Before I delve into the discussion on AI, I would like to take a moment to reflect on a historical event that showcases the power of interdisciplinary collaboration: the Manhattan Project. During the height of World War II, thousands of scientists, engineers, and policymakers from diverse backgrounds came together under a cloak of secrecy, driven by a singular mission—to develop the world’s first atomic bomb.
While the consequences of this invention have been the subject of much justifiable debate, the Manhattan Project does serve as a powerful reminder of the magnitude of what humanity can achieve when we unite our collective talents and resources to tackle complex challenges.
As we navigate the ever-changing landscape of human civilization, we find ourselves amid an AI revolution that is reshaping our world. With systems like ChatGPT gaining extraordinary capabilities, we must contemplate how to balance artificial intelligence’s potential benefits and risks.
Once again, the comparison to the Manhattan Project is apropos: some of our brightest minds have repeatedly cautioned that AI could pose a far greater threat to humanity than nuclear weapons. From Stephen Hawking to Nick Bostrom to the recent open letter from the Future of Life Institute, serious thinkers have been urging us, as a global society, to proceed with caution for decades.
A Call for an International, Interdisciplinary Project for AI
There has been much debate about how to proceed (or whether to proceed at all) with AI development, but little in the way of actionable recommendations that policy and business leaders can feasibly support and implement. In my judgment, advocating that AI simply not be pursued is not a viable plan. I agree that “shutting it all down” would be preferable to running an existential risk; I just don’t believe such a plan will ever be implemented. So I worry that advocating for it as the only option forgoes the opportunity to call for less ideal but more realistic options that could still produce positive outcomes.
So, despite my incredible lack of qualifications, I’d like to propose an alternative plan to address the AI challenges we collectively face. The only thing I’m certain of regarding this proposal is that it is insufficient, naive, and full of holes. But maybe it will inspire more capable minds to propose a better plan that we can realistically encourage global political and business leaders to support.
We require an international, interdisciplinary collaboration focused on developing the necessary scientific theories and regulatory frameworks to govern the development of beneficial AI.
Such a collaboration would have the scale and urgency of the Manhattan Project but with an international spirit and a humanitarian focus more reminiscent of the Human Genome Project. This global endeavor would assemble the best and brightest minds in AI, cognitive science, physics, and the social sciences, as well as policymakers and representatives from around the world. Leading world governments would supply resources and budgets commensurate with the existential importance of the mission. By pooling our collective wisdom and resources, we could lay a foundation for developing safe AI and integrating it into our societies and economies.
This ambitious collaboration could potentially focus on:
Developing a unified theory of general intelligence: By combining insights from neuroscience, physics, and AI research, we could develop a unified scientific theory of general intelligence that explains how complex systems (biologically derived or engineered) produce goal-directed behavior. It would need both to explain how our own brains work and to illuminate the underlying principles that govern any intelligent, goal-directed system. In the same way that relativity underpins nuclear science and genetic theory underpins biotechnology, such an accomplishment would perhaps be the most powerful contribution to engineering safe and ethical intelligent systems.
Addressing relevant philosophical questions: With a strong scientific understanding, we can collectively delve into the philosophical questions about life, consciousness, and sentience with renewed vigor. Developing insights here will help us navigate the ethical quandaries of AI technologies, helping to ensure their responsible integration into our society.
Crafting international policy and regulatory recommendations: Developing globally coordinated policy recommendations for world governments could harmonize AI development across nations and ensure that AI’s benefits and potential risks are fairly distributed among Earth’s inhabitants.
Encouraging public participation: Engaging the international community and people from diverse backgrounds in discussions about AI development will ensure that their concerns and values are taken into account, fostering a sense of shared responsibility and trust in AI technologies.
This collaborative initiative would need to operate with a sense of urgency and shared purpose, recognizing the potential global consequences of uncontrolled AI development. By uniting the brightest minds and key stakeholders from around the world, we could create an unprecedented global effort to address the complex challenges posed by AI, ensuring a safer and more ethical future for all.
How Might This Actually Work?
Given the current geopolitical landscape, I understand that many might consider such a global collaboration improbable. However, there are likely ways to pursue this initiative that could overcome current challenges and result in positive outcomes aligned with national interests while protecting humanity’s overall flourishing.
For a collaboration such as this to be effective, it would require significant forethought and planning. This would include establishing clear objectives and milestones, addressing potential conflicts of interest, encouraging cultural diversity and inclusion, developing policies for equitable knowledge and technology transfers, and creating a long-term plan for global AI governance that is both realistic and enforceable.
Governance and Organization: An initiative of this scale would require organization and oversight by a globally respected institution. While the United Nations might be the most recognizable option, less politically charged alternatives like the International Science Council could also be considered. Another possibility is creating a novel governing body composed of highly respected scientists from different countries, ensuring equal representation (though selecting this board would undoubtedly spark debate).
Incentives: To entice the participation of leading world governments as well as the best and brightest minds, the incentives need to be significant. For instance, how do you get both China and the US to commit billions of dollars in funding so that the effort can attract top talent away from industry?
Pay to Play: Imagine a scenario where the world’s best minds collaborate to develop key insights for unlocking the most powerful and impactful technologies ever conceived. These insights, produced by a well-resourced global team, would be protected with military-grade data security protocols. While the discoveries might eventually be published for peer review (in the spirit of scientific progress), only early participants in the collaboration would have immediate access to operationalize the insights. If appropriately structured, this could serve as a powerful inducement for governments, private companies, and individual scientists and technologists.
Sense of Duty: Scientists working on the Manhattan Project were strongly motivated by their sense of duty toward the war effort. Could a global initiative for developing safe AI, overseen by widely respected scientific luminaries, inspire a comparable sense of duty among our generation’s best and brightest? Based on conversations with the scientists and researchers I know personally, I believe that a properly constructed initiative, devoid of nationalistic tendencies, would indeed command such a response.
Funding & Resources:
Nations: Funding should come from the major world powers (the US, China, the EU, the UK, Japan, and India). While these countries would commit funds, they would not have governing authority or a role in appointing the governing body. However, the governing body would include a respected scientist from each participating country.
Corporations: Much of today’s leading AI research originates from the private sector. To participate in the resulting IP and technology transfers from the initiative’s work, these corporations would need to contribute resources (human and computational) and adhere to the initiative’s rules and guidelines for AI research (which would likely restrict certain forms of AI research and development).
Universities & Research Institutions: Participating nations would be required to offer incentives for universities and research institutions to contribute their brightest minds to the initiative, such as protecting tenure and providing other forms of job security. However, many of these organizations may not need much convincing, as they would likely be eager to participate in one of the most significant scientific and philosophical undertakings in human history.
Again, my proposal is naive and incomplete. Brighter minds will immediately see many flaws in its feasibility (as I can myself). I aim only to illustrate that such a plan could be conceived and to encourage more capable and experienced minds to propose an actionable alternative.
Prospects for a Brighter Future
The cynicism and negativity of our public discourse have never been higher, and the global political and socio-economic order seems to be on the brink of dark times. I believe that educated individuals and citizens of an increasingly interdependent global civilization have a responsibility to engage in these critical discussions. And if we wish to contribute our voices to the debate, do we not also have a responsibility to advocate for a responsible, collaborative, and realistic approach to resolving it?
As we embark on this AI journey, let us harness the power of interdisciplinary collaboration to ensure the responsible development and implementation of artificial intelligence. By doing so, we can create AI technologies that align with human values and contribute to a brighter future for all.
Then again, perhaps all of this is a silly pipe dream. If so, I would still prefer to be one of those dreamers who advocated that we at least attempt to make AI the next giant leap for humankind rather than the last petty squabble.