Smart people were once afraid that overpopulation would lead to wide-scale famine. The future is hard to predict, and there are many possible scenarios of how things may play out even in the scenario that AGI is unaligned. It would seem dubious to me for one to assign a 100% probability to any outcome based on just thought experiments about things that can happen in the future, especially when there are so many unknowns. With so much uncertainty, it seems a little bit premature to take on a full-on doom frame.
Smart people were once afraid that overpopulation would lead to wide-scale famine.
Yep. Concerned enough to start technical research on nitrogen fertilizer, selective breeding of crops, etc. It might be fairer to put this in the “foreseen and prevented” basket, not the “nonsensical prediction of doom” basket.
Great point! Though for what it’s worth, I didn’t mean to be dismissive of the prediction; my main point is that the future has not yet been determined. As you indicate, people can react to predictions of the future and end up on a different course.
There’s absolutely no need to assign “100% probability to any outcome” to be worried. I wear a seatbelt because I am afraid I might one day be in a car crash, despite the fact that I’ve not been in one yet. I understand there is more to your point, but I found that segment pretty objectionable and obviously irrelevant.
I was being hyperbolic, but point taken.
Smart people were once afraid that overpopulation would lead to wide-scale famine.
Agreed that ‘some smart people are really worried about AGI’ is a really weak argument for worrying about AGI, on its own. If you’re going to base your concern on deference, at the very least you need a more detailed model of what competencies are at work here, and why you don’t think it’s truth-conducive to defer to smart skeptics on this topic.
The future is hard to predict, and there are many possible scenarios of how things may play out even in the scenario that AGI is unaligned.
I agree with this, as stated; though I’m guessing your probability mass is much more spread out than mine, and that you mean to endorse something stronger than what I’d have in mind if I said “the future is hard to predict” or “there are many possible scenarios of how things may play out even in the scenario that AGI is unaligned”.
In particular, I think the long-term human-relevant outcomes are highly predictable if we build AGI systems and never align them: AGI systems end up steering the future to extremely low-value states, likely to optimize some simple goal that has no information content from human morality or human psychology. In that particular class of scenarios, I think there are a lot of extremely uncertain and unpredictable details (like ‘what specific goal gets optimized’ and ‘how does the AGI go about taking control’), but we aren’t equally uncertain about everything.
It would seem dubious to me for one to assign a 100% probability to any outcome
LessWrongers generally think that you shouldn’t give 100% probability to anything. When you say “100%” here, I assume you’re being hyperbolic; but I don’t know what sort of real, calibrated probability you think you’re arguing against here, so I don’t know which of 99.9%, 99%, 95%, 90%, 80%, etc. you’d include in the reasonable range of views.
based on just thought experiments about things that can happen in the future, especially when there are so many unknowns. With so much uncertainty, it seems a little bit premature to take on a full-on doom frame.
What are your own rough probabilities, across the broad outcome categories you consider most likely?
If we were in a world where AGI is very likely to kill everyone, what present observations would you expect to have already made, that you haven’t made in real life (thus giving Bayesian evidence that AGI is less likely to kill everyone)?
What are some relatively likely examples of possible future observations that would make you think AGI is very likely to kill everyone? Would you expect to make observations like that well in advance of AGI (if doom is in fact likely), such that we can expect to have plenty of time to prepare if we ever have to make that future update? Or do you think we’re pretty screwed, evidentially speaking, and can probably never update much toward ‘this is likely to kill us’ until it’s too late to do anything about it?
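To make the “Bayesian evidence” framing in these questions concrete, here is a minimal toy sketch of the update; every number in it is made up purely for illustration and is nobody’s actual estimate.

```python
# Toy Bayesian update -- all numbers are hypothetical, chosen only for illustration.
# Question: how much should *not* seeing an expected warning sign E move us away
# from the hypothesis "AGI is very likely to kill everyone"?

prior_doom = 0.5           # hypothetical prior P(doom)
p_e_given_doom = 0.8       # hypothetical: doom-worlds would usually show warning sign E by now
p_e_given_not_doom = 0.3   # hypothetical: non-doom worlds sometimes show E anyway

# We did NOT observe E, so we update on not-E.
p_not_e_given_doom = 1 - p_e_given_doom          # 0.2
p_not_e_given_not_doom = 1 - p_e_given_not_doom  # 0.7

posterior_doom = (p_not_e_given_doom * prior_doom) / (
    p_not_e_given_doom * prior_doom + p_not_e_given_not_doom * (1 - prior_doom)
)
print(f"P(doom | E not observed) = {posterior_doom:.2f}")  # ~0.22
```

The point is just that failing to observe a warning sign we would strongly expect in doom-worlds should move the probability down, by an amount that depends on how strongly we expected it.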
I’m still forming my views, and I don’t think I’m well calibrated to state any probability with authority yet. My uncertainty still feels so high that I think my error bars would be too wide for my actual probability estimates to be useful. Some things I’m thinking about:
Forecasters are not that great at making forecasts more than 5 years out, according to Superforecasting (IIRC), and I don’t think AGI is going to happen within the next 5 years.
AGI has not been created yet, and it’s possible that AI development gets derailed by other factors, e.g.:
Political and economic conditions change such that investment in AI slows down.
Global conflict escalates, which slows down AI (maybe this speeds it up, but I think there would be other pressing needs when a lot of resources have to be diverted to war).
Other global catastrophic risks could happen before AGI is developed, i.e. should I be more scared of AGI than, say, nuclear war or GCBRs at this point? (The odds aren’t that great, but they could still happen.)
On the path to AGI there could be a catastrophic failure that kills a few people but can be contained, and that gets people really afraid of AI.
Maybe some of the work on AI safety ends up helping produce mostly aligned AI. I’m not sure if everyone dies if an AI is 90% aligned.
Maybe the AGI systems that are built don’t exhibit instrumental convergence, for example if we get AGI through CAIS, which seems to me like the most likely way we’ll get there.
Maybe, like physics, once the low-hanging fruit has been plucked it takes a while to make breakthroughs, which extends the timelines.
For me to be personally afraid, I’d have to think this was the primary way I would die, which seems unlikely given all the other ways I could die between now and if/when AGI is developed.
AI researchers, who are probably more likely than anyone else to believe that AGI is possible, don’t have a consensus on this issue. I know experts can be wrong about their own fields, but I’d expect them to be more split on the issue (I don’t know what the current status is; I just know what it was in the Grace et al. survey). I know very little about AGI; should I be more concerned than AI researchers are?
I still think it’s important to work on AI Safety, since even a small chance that AGI could go wrong still carries a large expected cost. Most of my thinking comes from believing that a slow takeoff is more probable than a fast takeoff. I may also just be bad at being scared or feeling doomed.
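To make that “small chance, large downside” point concrete, here is a minimal expected-loss sketch; the probability and the loss figure below are placeholders for illustration, not anyone’s actual estimates.

```python
# Back-of-the-envelope expected-loss sketch -- the probability and the loss
# figure are placeholders, not anyone's actual estimates.

p_catastrophe = 0.01        # hypothetical: 1% chance of an unaligned-AGI catastrophe
loss_if_catastrophe = 1e9   # hypothetical badness of that outcome (arbitrary units)

expected_loss = p_catastrophe * loss_if_catastrophe
print(f"expected loss = {expected_loss:,.0f} units")  # 10,000,000 units

# Even at 1%, the expected loss is large enough under this toy model that
# spending some resources on safety work looks worthwhile.
```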
What are some relatively likely examples of possible future observations that would make you think AGI is very likely to kill everyone?
People start building AI that is agentic and open-ended in its actions.
Would you expect to make observations like that well in advance of AGI (if doom is in fact likely), such that we can expect to have plenty of time to prepare if we ever have to make that future update?
Yes, because I think the most likely scenario is a slow takeoff. Scaling compute costs money, systems actually need to be validated, and the more complex a system is, the harder it is to build correctly; it probably takes a few iterations to get things working well enough to test against a benchmark before moving on to building a more capable system. I think this process will have to happen many times before we get to AI that is dangerous, and along the way I’d expect to start seeing some interesting agentic behavior with short-horizon planning.
Or do you think we’re pretty screwed, evidentially speaking, and can probably never update much toward ‘this is likely to kill us’ until it’s too late to do anything about it?
I think the uncertainty will be pretty high until we start seeing sophisticated agentic behavior, though I don’t think we should wait that long to try to come up with solutions, since even a small chance that this could happen still warrants concern.