Folk at the Singularity Institute and the Future of Humanity Institute agree that it would probably (but unstably in the face of further analysis) be better to have brain emulations before de novo AI from an existential risk perspective (a WBE-based singleton seems more likely to go right than an AI design optimized for ease of development rather than safety). I actually recently gave a talk at FHI about the use of WBE to manage collective action problems such as Robin Hanson’s “Burning the Cosmic Commons” and pressures to cut corners on the safety of AI development, which I’ll be putting online soon. One of the projects being funded by the SIAI Challenge Grant ending tonight is an analysis of the relationship between AI and WBE for existential risks.
However, the conclusion that accelerating WBE (presumably via scanning or neuroscience, not by speeding up Moore’s-Law-type trends in hardware) is the best marginal project for existential risk reduction is much less clear. Here are just a few of the relevant issues:
1) Are there investments best made far in advance for WBE or for AI? It might be that the theory needed to build safe AI cannot be rushed as much as the institutions needed to manage WBEs, or it might be that WBE-regulating institutions require a buildup of political influence over decades.
2) The scanning and neuroscience knowledge needed to produce WBE may facilitate powerful AI well before WBE itself, as folk like Shane Legg suggest. In that case, accelerating scanning would primarily mean earlier AI, with a shift towards neuromorphic designs.
3) How much advance warning will WBE and AI give, or rather, what is our probability distribution over degrees of warning? The easier a transition is to see in advance, the more likely it is to be addressed by those with relevant skills, even if their incentives are weak. Possibilities with less warning, and thus less opportunity for learning, may offer higher returns on the efforts of the unusually long-term oriented.
Folk at FHI have done some work to accelerate brain emulation, e.g. with the WBE Roadmap and workshop, but there is much discussion here about estimating the risks and benefits of interventions that would go further, or that would try to shape future use of the technology and awareness of, and responses to, its risks.
I actually recently gave a talk at FHI about the use of WBE to manage collective action problems such as Robin Hanson’s “Burning the Cosmic Commons” and pressures to cut corners on the safety of AI development, which I’ll be putting online soon.
I would love to read this talk. Do you have a blog or something?
It’s on the SIAI website, here.
It seems to me that the post offers considerations that lean one in the direction of focusing efforts on encouraging good WBE, and that the considerations offered in this comment don’t do much to lean one back in the other direction. They mainly point to as-yet-unresolved uncertainties that might push us in many directions.
My main aim was to make clear the agreement that WBE is preferable to AI, and the difference between a technology being the most likely route to survival and its being the best marginal use of effort. It was not to carefully give and justify estimates of all the relevant parameters in this comment thread rather than in other venues (such as the aforementioned paper).