Selective opinion and answers (for longer discussions, respond to specific points and I’ll furnish more details):
Which kinds of differential technological development should we encourage, and how?
I recommend pushing for whole brain emulations, with a scanning-first approach and an emphasis on fully uploading actual humans. Also, military development of AI should be prioritised over commercial and academic development, if possible.
Which open problems are safe to discuss, and which are potentially dangerous?
Given what has already been published, I see little advantage to restricting discussion of most open problems.
What can we do to reduce the risk of an AI arms race?
Any methods that would reduce traditional arms races. Cross-ownership of stocks in commercial companies. Investment funds with specific AI disclosure requirements. Rewards for publishing interim results.
What can we do to raise the “sanity waterline,” and how much will this help?
Raising the individual sanity waterline among researchers is useful, but generally we want to raise the sanity waterline of institutions, which is harder but more important (and may have nothing to do with improving individuals).
Which interventions should we prioritize?
We need a solid push to see if reduced impact or Oracle AIs can work, and we need to get the academic and business worlds to take the risks more seriously. Interventions to stop the construction of dangerous AIs are unlikely to succeed, but “working with your company to make your AIs safer (and offering useful advice along the way)” could work. We need to develop useful tools we can offer others, not just nag them all the time.
How should x-risk reducers and AI safety researchers interact with governments and corporations?
Beggars can’t be choosers. For the moment, we need to make them take it seriously, convince them, and give away any safety-increasing info we might have. Later we may have to pursue different courses.
How can optimal philanthropists get the most x-risk reduction for their philanthropic buck?
Funding SIAI, FHI, and similar organisations; getting us in contact with policy makers; raising the respectability of x-risks.
How does AI risk compare to other existential risks?
Very different; no other x-risk has such uncertain probabilities and timelines, such huge risks and rewards, and so many different scenarios that could play out.
Which problems do we need to solve, and which ones can we have an AI solve?
We need to survive till AI, and survive AI. If we survive, most trends are positive, so we don’t need to worry about much else.
How can we develop microeconomic models of WBEs and self-improving systems?
With thought and research :-)
How can we be sure a Friendly AI development team will be altruistic?
Do it ourselves, normalise altruistic behaviour in the field, or make it in their self-interest to be altruistic.
How hard is it to create Friendly AI?
Probably extraordinarily hard if the FAI is as intelligent as we fear. More work needs to be done to explore partial solutions (limited impact, Oracle, etc.).
Is there a safe way to do uploads, where they don’t turn into neuromorphic AI?
Keep them as human as possible (in their interactions, in their virtual realities, in their identities, etc.).
How possible is it to do FAI research on a seastead?
How is this relevant? If governments were so concerned about AI potential that the location of the research became important, then we would have made tremendous progress in getting people to take it seriously, and AI will most likely not be developed by a small seasteading independent group.
How much must we spend on security when developing a Friendly AI team?
We’ll see at the time.
What’s the best way to recruit talent toward working on AI risks?
In general: get people involved with it as a problem to be worked on, socialise them into our world, get them to care. For AI researchers: conferences, publications, and more respectable publicity.
How difficult is stabilizing the world so we can work on Friendly AI slowly?
Very.
How hard will a takeoff be?
Little useful data. Use scenario planning rather than probability estimates.
What is the value of strategy vs. object-level progress toward a positive Singularity?
Both are needed; they need to be closely connected, with easy shifts from one to the other. There should possibly be more strategy at the current time.
How feasible is Oracle AI?
As yet unknown. Research is progressing; based on past performance, I expect new insights to arrive.
Can we convert environmentalists into people concerned with existential risk?
With difficulty for AI risks, with ease for some others (extreme global warming). Would this be useful? Smaller, more tightly focused pressure groups would perform much better, even with less influence.
Is there no such thing as bad publicity [for AI risk reduction] purposes?
Anything that makes it seem more like an area for cranks is bad publicity.
What are your most important disagreements with other FHI/SIAI people? How do you account for these disagreements?
You say:
I recommend pushing for whole brain emulations
but also:
We need a solid push to see if reduced impact or Oracle AIs can work
which makes me a bit confused. Are you saying we should push them simultaneously, or what? Also, what path do you see from a successful Oracle AI to a positive Singularity? For example, use Oracle AI to develop WBE technology, then use WBEs to create FAI? Or something else?
What are your most important disagreements with other FHI/SIAI people? How do you account for these disagreements?
Main disagreement with FHI people is that I’m more worried about AI than they are (I’m probably up with the SIAI folks on this). I suspect an anchoring effect here—I was drawn to the FHI’s work through AI risk, others were drawn in through other angles (also I spend much more time on Less Wrong, making AI risks very salient). Not sure what this means for accuracy, so my considered opinion is that AI is less risky than I individually believe.
Are you saying we should push them simultaneously, or what?
My main disagreement with SIAI is that I think FAI is unlikely to be implementable on time. So I want to explore alternative avenues, several ones ideally. Oracle to FAI would be one route; Oracle to people taking AI seriously to FAI might be another. WBE opens up many other avenues (including “no AI”), so is also worth looking into.
I haven’t bothered to try and close the gap between me and SIAI on this, because even if they are correct, I think it’s valuable for the group to have someone looking into non-FAI avenues.
Thanks for the answers. The main problem I have with Oracle AI is that it seems a short step from OAI to UFAI, but a long path to FAI (since you still need to solve ethics and it’s hard to see how OAI helps with that), so it seems dangerous to push for it, unless you do it in secret and can keep it secret. Do you agree? If so, I’m not sure how “Oracle to people taking AI seriously to FAI” is supposed to work.
My main “pressure point” is pushing UFAI development towards OAI, i.e. I don’t advocate building OAI, but making sure that the first AGIs will be OAIs. And I’m using far too many acronyms.
What does it matter that the first AGIs will be OAIs, if UFAIs follow immediately after? I mean, once knowledge of how to build OAIs starts to spread, how are you going to make sure that nobody fails to properly contain their Oracles, or intentionally modifies them into AGIs that act on their own initiative? (This recent post of mine might better explain where I’m coming from, if you haven’t already read it.)
We can already think productively about how to win if oracle AIs come first. Paul Christiano is working on this right now, see the “formal instructions” posts on his blog. Things are still vague but I think we have a viable attack here.
Wot cousin_it said.
Of course the model “OAIs are extremely dangerous if not properly contained; let’s let everyone have one!” isn’t going to work. But there are many things we can try with an OAI (building a FAI, for instance), and most importantly, some of these things will be experimental (the FAI approach relies on getting the theory right, with no opportunity to test it). And there is a window that doesn’t exist with a genie—a window where people realise superintelligence is possible and where we might be able to get them to take safety seriously (and they’re not all dead). We might also be able to get exotica like a limited impact AI or something like that, if we can find safe ways of experimenting with OAIs.
And there seems no drawback to pushing an UFAI project into becoming an OAI project.
Cousin_it’s link is interesting, but it doesn’t seem to have anything to do with OAI, and instead looks like a possible method of directly building an FAI.
Of course the model “OAIs are extremely dangerous if not properly contained; let’s let everyone have one!” isn’t going to work.
Hmm, maybe I’m underestimating the amount of time it would take for OAI knowledge to spread, especially if the first OAI project is a military one (on the other hand, the military and their contractors don’t seem to be having better luck with network security than anyone else). How long do you expect the window of opportunity (i.e., the time from the first successful OAI to the first UFAI, assuming no FAI gets built in the meantime) to be?
some of these things will be experimental
I’d like to have FAI researchers determine what kind of experiments they want to do (if any, after doing appropriate benefit/risk analysis), which probably depends on the specific FAI approach they intend to use, and then build limited AIs (or non-AI constructs) to do the experiments. Building general Oracles that can answer arbitrary (or a wide range of) questions seems unnecessarily dangerous for this purpose, and may not help anyway depending on the FAI approach.
And there seems no drawback to pushing an UFAI project into becoming an OAI project.
There may be, if the right thing to do is to instead push them to not build an AGI at all.
One important fact I haven’t been mentioning: OAIs help tremendously with medium-speed takeoffs (fast takeoffs are dangerous for the usual reasons; slow takeoffs mean that we will have moved beyond OAIs by the time the intelligence level becomes dangerous), because we can then use them to experiment.
There may be, if the right thing to do is to instead push them to not build an AGI at all.
I’m interacting with AGI people at the moment (organising a joint-ish conference), and will have a clearer idea of how they react to these ideas at a later stage.
slow takeoffs mean that we will have moved beyond OAIs by the time the intelligence level becomes dangerous
Moved where/how? Slow takeoff means we have more time, but I don’t see how it changes the nature of the problem. Low time to WBE makes (not particularly plausible) slow takeoff similar to the (moderately likely) failure to develop AGI before WBE.
Together with Wei’s point that OAI doesn’t seem to help much, there is the downside that existence of OAI safety guidelines might make it harder to argue against pushing AGI in general. So on net it’s plausible that this might be a bad idea, which argues for weighing this tradeoff more carefully.
Possibly. But in my experience even getting the AGI people to admit that there might be safety issues is over 90% of the battle.
It’s useful for AGI researchers to notice that there are safety issues, but not useful for them to notice that there are “safety issues” which can be dealt with by following OAI guidelines. The latter kind of understanding might be worse than none at all, as it seemingly resolves the problem. So it’s not clear to me that getting people to “admit that there might be safety issues” is in itself a worthwhile milestone.
My main disagreement with SIAI is that I think FAI is unlikely to be implementable on time.
Why do you say this is a disagreement? Who at SIAI thinks FAI is likely to be implementable on time (and why)?
So I want to explore alternative avenues, several ones ideally.
Right, assuming we can find any alternative avenues of comparable probability of success. I think it’s unlikely for FAI to be implementable both “on time” (i.e. by humans in current society), and via alternative avenues (of which fast WBE humans seems the most plausible one, which argues for late WBE that’s not hardware-limited, not pushing it now). This makes current research as valuable as alternative routes despite improbability of current research’s success.
Why do you say this is a disagreement? Who at SIAI thinks FAI is likely to be implementable on time (and why)?
Let me rephrase: I think the expected gain from pursuing FAI is less than that from pursuing other methods. Other methods are less likely to work, but more likely to be implementable. I think SIAI disagrees with this assessment.
I think the expected gain from pursuing FAI is less than that from pursuing other methods. Other methods are less likely to work, but more likely to be implementable.
I assume that by “implementable” you mean that it’s an actionable project, that might fail to “work”, i.e. deliver the intended result. I don’t see how “implementability” is a relevant characteristic. What matters is whether something works, i.e. succeeds. If you think that other methods are less likely to work, how are they of greater expected value? I probably parsed some of your terms incorrectly.
Whether the project reached the desired goal, versus whether that goal will actually work. If Nick and Eliezer both agreed about some design that “this is how you build a FAI”, then I expect it will work. However, I don’t think it’s likely that would happen. It’s more likely they will say “this is how you build a proper Oracle AI”, but less likely the Oracle will end up being safe.
Whether the project reached the desired goal, versus whether that goal will actually work.
Okay, but I still don’t understand how a project with lower probability of “actually working” can be of higher expected value. I’m referring to this statement:
I think the expected gain from pursuing FAI is less than that from pursuing other methods. Other methods are less likely to work...
The argument you seem to be giving in support of the higher expected value of other methods is that they are “more likely to be implementable” (a project reaching its stated goal, even if that goal turns out to be no good), but I don’t see how that is an interesting property.
He didn’t say other architectures would be no good, he said they’re less likely to be safe.
He thinks the distribution P(outcome | do(complete Oracle AI project)) isn’t as highly peaked at Weirdtopia as P(outcome | do(complete FAI)); Oracle AI puts more weight on regions like “Lifeless universe”, “Eternal Torture”, “Rainbows and Slow Death”, and “Failed Utopia”.
However, “Complete FAI” isn’t an actionable procedure, so he examines the chance of completion conditional on different actions he can take. “Not worth pursuing because non-implementable” means that available FAI supporting actions don’t have a reasonable chance of producing friendly AI, which discounts the peak in the conditional outcome distribution at valuable futures relative to do(complete FAI). And supposedly he has some other available oracle AI supporting strategy which fares better.
Eating a sandwich isn’t as cool as building an interstellar society with wormholes for transportation, but I’m still going to make a sandwich for lunch, because it’s going to work and maybe be okay-ish.
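A minimal sketch of that trade-off, with invented placeholder probabilities (not anyone’s actual estimates): a route whose completed result is less likely to be good can still have the higher expected value if the route itself is much more likely to be completable at all.

```python
# Expected value of pursuing a route: P(project can actually be completed)
# times P(outcome is good | project completed). All numbers are invented
# placeholders for illustration, not anyone's actual estimates.

def expected_value(p_complete: float, p_good_given_complete: float,
                   value_of_good_outcome: float = 1.0) -> float:
    """EV of a route = chance it can be carried out at all, times the chance
    its result is actually good, times the value of a good result."""
    return p_complete * p_good_given_complete * value_of_good_outcome

# Hypothetical: FAI is very likely good if completed, but very hard to complete;
# an Oracle AI project is easier to complete, but riskier even when "successful".
ev_fai = expected_value(p_complete=0.05, p_good_given_complete=0.9)
ev_oracle = expected_value(p_complete=0.40, p_good_given_complete=0.2)

print(f"FAI route:    {ev_fai:.3f}")     # 0.045
print(f"Oracle route: {ev_oracle:.3f}")  # 0.080
```

With these made-up numbers the Oracle route comes out ahead despite being less likely to be safe once built; the disagreement above is over what the real values of the two probabilities are.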
Main disagreement with FHI people is that I’m more worried about AI than they are (I’m probably up with the SIAI folks on this).
Where can we read FHI’s analysis of AI risk? Why are they not as worried as you and SIAI people? Has there ever been a debate between FHI and SIAI on this? What threats are they most worried about? What technologies do they want to push or slow down?
AI is high on the list—one of the top risks, even if their objective assessment is lower than SIAI’s. Nuclear war, synthetic biology, nanotech, pandemics, social collapse: these are the other ones we’re looking at.
Basically they don’t buy the “AI inevitably goes foom and inevitably takes over” story. They see definite probabilities of these happening, but their estimates are closer to 50% than to 100%.
They estimate it at 50%???
And there are other things they are more concerned about?
What are those other things?
They estimate a variety of conditional statements (“AI possible this century”, “if AI then FOOM”, “if FOOM then DOOM”, etc.) with probabilities between 20% and 80% (I had the figures somewhere, but can’t find them). I think when it was all multiplied out it was in the 10-20% range.
And I didn’t say they thought other things were more worrying; just that AI wasn’t the single overwhelming risk/reward factor that SIAI (and I) believe it to be.
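A minimal sketch of how such a chain of conditional estimates multiplies out, using placeholder figures within the stated 20%-80% band rather than FHI’s actual (unlocated) numbers:

```python
# Placeholder conditional estimates, not FHI's actual figures.
p_ai_this_century = 0.8    # "AI possible this century"
p_foom_given_ai = 0.5      # "if AI then FOOM"
p_doom_given_foom = 0.4    # "if FOOM then DOOM"

# Multiplying the chain gives the overall probability of AI-driven doom.
p_ai_doom = p_ai_this_century * p_foom_given_ai * p_doom_given_foom
print(f"P(doom from AI) = {p_ai_doom:.2f}")  # 0.16, i.e. within the 10-20% range
```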
A wild guess: FHI believes that the best that can reasonably be done about existential risks at this point in time is to do research into existential risks, including possible unknown unknowns, and into strategies to reduce current existential risks. This somewhat agrees with their FAQ:
Research into existential risk and analysis of potential countermeasures is a very strong candidate for being the currently most cost-effective way to reduce existential risk. This includes research into some methodological problems and into certain strategic questions that pertain to existential risk. Similarly, actions that contribute indirectly to producing more high-quality analysis on existential risk and a capacity later to act on the result of such analysis could also be extremely cost-effective. This includes, for example, donating money to existential risk research, supporting organizations and networks that engage in fundraising for existential risks work, and promoting wider awareness of the topic and its importance.
In other words, FHI seems to focus on meta issues, existential risks in general, rather than associated specifics.