Thanks. The nonexistence of warning shots is not in my control, but neither is the existence of a black hole headed for earth. I’m justified in acting as if there isn’t a black hole, because if there is, we’re pretty screwed anyway. I feel like maybe something similar is true (though to a lesser extent) of warning shots, but I’m not sure. If we have a 1% chance of success without warning shots and a 10% chance with warning shots, then I probably increase our overall chance of success more if I focus on warning shot scenarios.
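To spell out the arithmetic behind that, here's a toy expected-value sketch (the warning-shot probability and the effect of my effort are made-up numbers, purely for illustration):

```python
# Toy model: is focusing effort on warning-shot scenarios worth more overall?
# All numbers are illustrative assumptions, not estimates.

p_warning_shots = 0.5    # assumed chance that warning shots happen at all
p_success_no_ws = 0.01   # baseline chance of success without warning shots
p_success_ws = 0.10      # baseline chance of success with warning shots
effort_boost = 1.5       # assumed multiplier my focused effort gives to one scenario

def overall_p_success(focus_on_ws: bool) -> float:
    s_ws = p_success_ws * (effort_boost if focus_on_ws else 1.0)
    s_no_ws = p_success_no_ws * (effort_boost if not focus_on_ws else 1.0)
    return p_warning_shots * s_ws + (1 - p_warning_shots) * s_no_ws

print(overall_p_success(focus_on_ws=True))   # 0.080
print(overall_p_success(focus_on_ws=False))  # 0.0575
```

The same relative boost buys more overall probability in the scenario with the higher baseline, which is the intuition I'm leaning on.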
Rudeness no problem; did I come across as arrogant or something?
I agree that that’s the major variable. And that’s what I had in mind when I said what I said: It seems to me that this community has more influence in short-timeline worlds than long-timeline worlds. Significantly more. Because long-timeline worlds involve AI being made by the CCP or something. But maybe I’m wrong about that! You seem to think that long-timeline worlds involve someone like you coming up with a new paradigm, and if that’s true, then yeah maybe it’ll still happen in the Bay after all. Seems somewhat plausible to me.
I agree that value of information is huge.
No, not at all. It's just that the criticism was almost directly "your status is not high enough for this". It's like I took the underlying implication which most commonly causes offense and said it directly. It was awkward because it did not feel like you were over-reaching in terms of status, even in appearance, but you happened to be reasoning in a way which (subtly) only made sense for a version of Daniel with a much larger public following. So I somehow needed to convey that without the subtext which such a thing would almost always carry.
That was kind of long-winded, but this was an unusually interesting case of word-usage.
Ah interesting. I haven’t thought much about the influence of the community as a whole (as opposed to myself); I find this plausible, though I’m definitely not convinced yet. Off the top of my head, seems like it largely depends on the extent to which the rationalist community project succeeds in the long run (even in the weak sense of individual people going their separate ways and having outsized impact) or reverts back to the mean. Note that that is itself something which you and I probably do have an outsized impact on!
When I look at the rationalist community as a bunch of people who invest heavily in experimentation and knowledge and learning about the world, that looks to me like a group which is playing the long game and should have a growing advantage over time. On the other hand, if I look at the rationalist community as a group that's plurality software developers, with a disproportionate chunk of AI researchers… yeah, I can see where that would look like influence on AI in the short term.
OK, cool. Well, I’m still a bit confused about why my status matters for this—it’s relative influence that matters, not absolute influence. Even though my absolute influence may be low, it seems higher in the US than in Asia, and thus higher in short-timelines scenarios than long-timelines scenarios. Or so I’m thinking. (Because, as you say, my influence flows through the community.)
You might be right about the long game thing. I agree that we'll learn more and grow more in size and wealth over time. However, I think (a) the levers of the world will shift away from the USA, (b) the levers of the world will shift away from OpenAI and DeepMind and towards more distributed giant tech companies and government projects advised by prestigious academics (in other words, the usual centers of power and status will have more control over time; the current situation is an anomaly), and (c) various other things might happen that effectively impose a discount rate.
So I don't think the two ways of looking at the rationalist community are in conflict. They are both true. It's just that I think considerations (a), (b), and (c) outweigh the gains in knowledge, wealth, size, etc.
Lemme sketch out a model here. We start with all the people who have influence on the direction of AI. We then break out two subgroups—US and Asia—and hypothesize that the total influence of the US goes down over time, and the total influence of Asia goes up over time. Then we observe that you are in the US group, so this bodes poorly for your own personal influence. However, your own influence is small, which means that your contribution to the US' total influence is small. This means your own influence can vary more-or-less independently of the US total; a delta in your influence is not large enough to cause a significant delta in the US total. Now, if there were some reason to think that your influence was strongly correlated with the US total, then the US total would matter. And there are certainly things we could think of which might make that true, but "US total influence" does not seem likely to be a stronger predictor of "Daniel's influence" than any of 50 other variables we could think of. The full pool of US AI researchers/influencers does not seem like all that great a reference class for Daniel Kokotajlo—and as long as your own influence is small relative to the total, a reference class is basically all it is.
An analogy: GDP is only very weakly correlated with my own income. If I had dramatically more wealth—like hundreds of millions or billions—then my own fortunes would probably become more tied to GDP. But as it is, using GDP to predict my income is effectively treating the whole US population as a reference class for me, and it’s not a very good reference class.
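Here's a quick toy simulation of that "small share of the total" point (the distributions and population size are made up; it only illustrates why the group total barely predicts the individual):

```python
# Toy illustration: when one person's influence is a tiny share of a group's total,
# knowing the group total tells you almost nothing about that person.
# All distributions and sizes are made-up assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_worlds, n_others = 5_000, 1_000   # hypothetical worlds, and other US researchers/influencers

individual = rng.lognormal(0, 1, size=n_worlds)                # one person's influence
others = rng.lognormal(0, 1, size=(n_worlds, n_others)).sum(axis=1)
us_total = individual + others

print(np.corrcoef(individual, us_total)[0, 1])  # ~0.03, i.e. roughly 1/sqrt(n_others)
```

As long as the individual's share stays tiny, the group total (or its trend) is little more than a reference class.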
Anyway, the more interesting part...
I apparently have very different models of how the people working on AI are likely to shift over time. If everything were primarily resource-constrained, then I'd largely agree with your predictions. But even going by current trends, algorithmic/architectural improvements matter at least as much as raw resources. Giant organizations—especially governments—are not good at letting lots of people try their clever ideas and then quickly integrating the successful tricks into the main product. Big organizations/governments are all about coordinating everyone around one main plan, with the plan itself subject to lots of political negotiation and compromise, and then executing that plan. That's good for deploying lots of resources, but bad for rapid innovation.
Along similar lines, I don’t think the primary world seat of innovation is going to shift from the US to China any time soon. China has the advantage in terms of raw population, but it’s only a factor of 4 advantage; really not that dramatic a difference in the scheme of things. On the other hand, Western culture seems dramatically and unambiguously superior in terms of producing innovation, from an outside view. China just doesn’t produce breakthrough research nearly as often. 20 years ago that could easily have been attributed to less overall wealth, but that becomes less and less plausible over time—maybe I’m just not reading the right news sources, but China does not actually seem to be catching up in this regard. (That said, this is all mainly based on my own intuitions, and I could imagine data which would change my mind.)
That said, I also don't think a US/China shift is all that relevant here either way; it's only weakly correlated with the influence of this particular community. This particular community is a relatively small share of US AI work, so a large-scale shift would be dominated by the rest of the field, and the rationalist community in particular has many channels to grow/shrink in influence independent of the US AI community. It's essentially the same argument I made about your influence earlier, but this time applied to the community as a whole.
I do think “various other things might happen that effectively impose a discount rate” is highly relevant here. That does cut both ways, though: where there’s a discount rate, there’s a rate of return on investment, and the big question is whether rationalists have a systematic advantage in that game.
I think I mostly agree with you about innovation, but (a) I think that building AI will increasingly be more like building a bigger airport or dam than like inventing something new (resources are the main constraint, not ideas; happy to discuss this further), (b) I think that things in the USA could deteriorate, eating away at the advantage the USA has, and (c) I think algorithmic innovations created in the USA will make their way to China in less than a year on average, through various means.
Your model of influence is interesting, and different from mine. Mine is something like: "For me to positively influence the world, I need to produce ideas which then spread through a chain of people to someone important (e.g. someone building AI, or deciding whether to deploy AI). I am separated from important people in the USA by fewer degrees of separation, and moreover the links are much stronger (e.g. my former boss lives in the same house as a top researcher at OpenAI), compared to important people in China. Moreover, it's just inherently more likely that my ideas will spread in the US network than in the Chinese network because my ideas are in English, etc. So I'm orders of magnitude more likely to have a positive effect in the USA than in China. (But, in the long run, there'll be fewer important people in the USA, and they'll be more degrees of separation away from me, and a greater number of poseurs will be competing for their attention, so this difference will diminish.)" Mine seems more intuitive/accurate to me so far.
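To make the "orders of magnitude" bit concrete, here's a back-of-the-envelope version of that chain model (the per-link transmission probabilities and hop counts are made-up numbers, just to show how quickly the difference compounds):

```python
# Toy chain-of-influence model: an idea must pass through n links to reach someone
# important, and each link passes it along with probability p. Numbers are illustrative.

def p_idea_reaches(p_per_link: float, n_links: int) -> float:
    return p_per_link ** n_links

usa = p_idea_reaches(p_per_link=0.5, n_links=2)    # few, strong links  -> 0.25
china = p_idea_reaches(p_per_link=0.2, n_links=5)  # more, weaker links -> 0.00032
print(usa / china)  # ~780x, i.e. roughly three orders of magnitude
```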
I’d be interested to hear more about why you think resources are likely to be the main constraint, especially in light of that OpenAI report earlier this year.
OK, sent you a PM