Some thoughts that came to me after I wrote this post:
--I’m not sure I should define date-competitive the way I do. Maybe instead of “can be built” it should be “is built.” If we go the latter route, the FOOM scenario is an extremely intense date competition. If we go the former route, the FOOM scenario is not necessarily an intense date competition; it depends on what other factors are at play. For example, maybe there are only a few major AI projects and all of them are pretty socially responsible, so a design is more likely to win if it can be built sooner, but it won’t necessarily win; maybe cooler heads will prevail and build a safer design instead.
--Why is date-competitiveness worth calling a kind of competitiveness at all? Why not just say: “We want our AI safety scheme/design to be cost- and performance-competitive, and also we need to be able to build it fairly quickly compared to the other stuff that gets built.” Well, 1. Even that is clunky and awkward compared to the elegant “...and also date-competitive.” 2. It really does have the comparative flavor of competition to it; what matters is not how long it takes us to complete our safety scheme, but how long it takes relative to unaligned schemes, and it’s not as simple as “we need to be first” — rather, sooner is better, but finishing later isn’t necessarily game over. 3. It seems useful for describing date competitions, which are important to distinguish from situations that are not date competitions, or are less so. (Aside: A classic criticism of the “Let’s build uploads first, and upload people we trust” strategy is that neuromorphic AI will probably come before uploads. In other words, this strategy is not date-competitive.)
--I’m toying with the idea of adding “alignment-competitiveness” (meaning, as aligned or more aligned than competing systems) and “alignment competition” to the set of definitions. This sounds silly, but it would be conceptually neat, because then we can say: We hope for scenarios in which control of the future is a very intense alignment competition, and we are working hard to make it that way.
I’m ambivalent about adding “alignment competitiveness”, since for me competitiveness in the context of AI safety is about asking whether aligned approaches can compete (in a different sense of “compete”) with unaligned ones.
I had been thinking it is sometimes nice to talk about the competitiveness of AI designs more generally, not just alignment schemes. E.g. neuromorphic AI is probably more date-competitive and cost-competitive than uploads. (It might be less performance-competitive, though.)