Thanks for the update, Daniel! How about the predictions about energy consumption?
In what year will the energy consumption of humanity or its descendants be 1000x greater than now?
Your median date for humanity’s energy consumption being 1 k times as large as now is 2031, whereas Ege’s is 2177. What is your median primary energy consumption in 2027, as reported by Our World in Data, as a fraction of that in 2023? Assuming constant growth from 2023 until 2031, your median fraction would be 31.6 (= (10^3)^((2027 − 2023)/(2031 − 2023)); see the sketch below the bet terms). I would be happy to set up a bet where:
I give you 10 k€ if the fraction is higher than 31.6.
You give me 10 k€ if the fraction is lower than 31.6. I would then use the 10 k€ to support animal welfare interventions.
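Here is a minimal sketch of the constant-growth interpolation above (to be clear, constant exponential growth is just my baseline assumption, not your actual model):

```python
# Implied 2027/2023 primary energy ratio under constant exponential
# growth: 1x in 2023 rising to 1,000x at the 2031 median date.
ratio_at_median = 1e3                 # 1,000x consumption at the median date
t0, t_bet, t_median = 2023, 2027, 2031
fraction_2027 = ratio_at_median ** ((t_bet - t0) / (t_median - t0))
print(round(fraction_2027, 1))        # 31.6
```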
To be clear, my view is that we’ll achieve AGI around 2027, ASI within a year of that, and then some sort of crazy robot-powered self-replicating economy within, say, three years of that. So 1000x energy consumption around then or shortly thereafter (depends on the doubling time of the crazy superintelligence-designed-and-managed robot economy).
So, the assumption of constant growth from 2023 to 2031 is very false, at least as a representation of my view. I think my median prediction for energy consumption in 2027 is the same as yours.
Thanks, Daniel!
Is your median date of ASI as defined by Metaculus around 2028 July 1? (It would be if your time until AGI and your time from AGI to ASI were strongly correlated, since a 2027 median for AGI plus roughly 1 year from AGI to ASI gives mid-2028.) If so, I am open to a bet where:
I give you 10 k€ if ASI happens by the end of 2028 (slightly after your median, such that you have a positive expected monetary gain).
Otherwise, you give me 10 k€, which I would donate to animal welfare interventions.
That’s better, but the problem remains that I value pre-AGI money much more than I value post-AGI money, and you are offering to give me post-AGI money in exchange for my pre-AGI money (in expectation).
You could instead pay me $10k now, with the understanding that I’ll pay you $20k later in 2028 unless AGI has been achieved in which case I keep the money… but then why would I do that when I could just take out a loan for $10k at low interest rate?
I have in fact made several bets like this, totalling around $1k, with 2030 and 2027 as the due dates iirc. I imagine people will come to collect from me when the time comes, if AGI hasn’t happened yet.
But it wasn’t rational for me to do that, I was just doing it to prove my seriousness.
Have you or other people worried about AI taken such loans (e.g. to increase donations to AI safety projects)? If not, why?
Idk about others. I haven’t investigated serious ways to do this,* but I’ve taken the low-hanging fruit—it’s why my family hasn’t paid off our student loan debt for example, and it’s why I went for financing on my car (with as long a payoff time as possible) instead of just buying it with cash.
*Basically I’d need to push through my ugh field and go do research on how to make this happen. If someone offered me a $10k low-interest loan on a silver platter I’d take it.
We could set up the bet such that you would neither lose nor gain money in expectation under your views, whereas you would lose money in expectation with a loan? Also, note the bet I proposed above was about ASI as defined by Metaculus, not AGI.
I gain money in expectation with loans, because I don’t expect to have to pay them back. What specific bet are you offering?
I see. I was implicitly assuming a near-term loan, or one with an interest rate linked to economic growth, but you might be able to get a long-term loan with a fixed interest rate.
I transfer 10 k today-€ to you now, and you transfer 20 k today-€ to me if there is no ASI as defined by Metaculus by date X, which has to be sufficiently far away for the bet to be better than your best loan. X could be 12.0 years (= LN(0.9*20*10^3/(10*10^3))/LN(1 + 0.050)) from now, assuming a 90 % chance I win the bet and an annual growth of my investment of 5.0 %. However, if the cost-effectiveness of my donations also decreases 5 % annually, then I can only go as far as 6.00 years (= 12.0/2).
I also guess the stock market will grow faster than suggested by historical data, so I would only want X to be roughly as far out as 2028. So, at the end of the day, it looks like you are right that you would be better off getting a loan.
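Here is a minimal sketch of the break-even arithmetic above (the decay term is how I am modelling the 5 % annual decrease in cost-effectiveness, which is roughly the halving in the comment):

```python
import math

# Break-even horizon X: expected winnings p * 20 k€ at time X must match
# 10 k€ invested now and compounding at annual growth rate g until X,
# i.e. p * 20e3 = 10e3 * (1 + g)**X.
p, g = 0.90, 0.050
X = math.log(p * 20e3 / 10e3) / math.log(1 + g)
print(round(X, 1))        # 12.0 years

# If the cost-effectiveness of donations also decays at rate d per year,
# winnings at X are further discounted by (1 - d)**X, so the condition
# becomes p * 20e3 * (1 - d)**X = 10e3 * (1 + g)**X.
d = 0.050
X_decay = math.log(p * 20e3 / 10e3) / (math.log(1 + g) - math.log(1 - d))
print(round(X_decay, 1))  # ~5.9 years, close to the 6.00 (= 12.0/2) above
```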
Thanks for doing the math on this and changing your mind! <3
Thanks, Daniel. That makes sense.
My offer was also in that spirit of you proving your seriousness. Feel free to suggest bets which would be rational for you to take. Do you think there is a significant risk of a large AI catastrophe in the next few years? For example, what do you think is the probability of the human population decreasing from mid-2026 to mid-2027?
You are basically asking me to give up money in expectation to prove that I really believe what I’m saying, when I’ve already done literally this multiple times. (And besides, hopefully it’s pretty clear that I am serious from my other actions.) So, I’m leaning against doing this, sorry. If you have an idea for a bet that’s net-positive for me I’m all ears.
Yes, I do think there’s a significant risk of a large AI catastrophe in the next few years. To answer your specific question, maybe something like 5%? idk.
Are you much higher than Metaculus’ community on the question “Will ARC find that GPT-5 has autonomous replication capabilities?”
Good question. I guess I’m at 30%, so 2x higher? Low confidence: I haven’t thought about it much, there’s a lot of uncertainty about what METR/ARC will classify as success, and I also haven’t reread ARC/METR’s ARA eval to remind myself of how hard it is.
Have your probabilities for AGI on given years changed at all since this breakdown you gave 7 months ago? I, and I’m sure many others, defer quite a lot to your views on timelines, so it would be good to have an updated breakdown.
15% - 2024
15% - 2025
15% - 2026
10% - 2027
5% - 2028
5% - 2029
3% - 2030
2% - 2031
2% - 2032
2% - 2033
2% - 2034
2% - 2035
My 2024 probability has gone down from 15% to 5%. Other than that things are pretty similar, so just renormalize I guess.
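E.g., one way to renormalize (my reading, not necessarily the only one: fix 2024 at 5% and rescale the other years so the total probability through 2035 stays at the original 78%):

```python
# One way to renormalize: fix 2024 at 5% and rescale the other years so
# the total probability of AGI by end of 2035 stays at the original 78%.
breakdown = {2024: 15, 2025: 15, 2026: 15, 2027: 10, 2028: 5, 2029: 5,
             2030: 3, 2031: 2, 2032: 2, 2033: 2, 2034: 2, 2035: 2}
total = sum(breakdown.values())       # 78%
new_2024 = 5
scale = (total - new_2024) / (total - breakdown[2024])  # 73/63
updated = {year: (new_2024 if year == 2024 else round(prob * scale, 1))
           for year, prob in breakdown.items()}
print(updated)                        # e.g. 2025 goes from 15% to ~17.4%
```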
I am not Daniel, but why would “constant growth” make any sense under Daniel’s worldview? The whole point is that AI can achieve explosive growth, and right now energy consumption growth is determined by human growth, not AI growth, so it seems extremely unlikely for growth between now and then to be constant.
Daniel almost surely doesn’t think growth will be constant. (Presumably he has a model similar to the one here.) I assume he also thinks that by the time energy production is >10x higher, the world has generally been radically transformed by AI.
Thanks, Ryan.
That makes sense. Daniel, my terms are flexible. Just let me know your median fraction for 2027, and we can go from there.
Right. I think the bet is roughly neutral with respect to monetary gains under Daniel’s view, but Daniel may want to go ahead despite that to show that he really endorses his views. Not taking the bet may suggest Daniel is worried about losing 10 k€ in a world where 10 k€ is still relevant.
I’m not sure I understand. You and I, as far as I know, have the same beliefs about world energy consumption in 2027, at least on our median timelines. I think it could be higher, but only if AGI timelines are a lot shorter than I think and takeoff is a lot faster than I think. And in those worlds we probably won’t be around to resolve the bet in 2027, nor would I care much about winning that bet anyway. (Money post-singularity will be much less valuable to me than money before the singularity.)