“that is, without relying on other people’s arguments” doesn’t feel quite right to me, since obviously a bunch of these arguments have been made before. It’s more like: without taking any previous claims for granted.
Changed, though the way I use words, those phrases mean the same thing.
Your list of 3 differs from my list of 3.
Yeah, this was not meant to be a direct translation of your list. (Your list of 3 is encompassed by my first and third points.) You mentioned six things:
more compute, better algorithms, and better training data
and
replication, cultural learning, and recursive improvement
which I wanted to condense. (The model size point was meant to capture the compute case.) I did have a lot of trouble understanding what the point of that section was, though, so it’s plausible that I’ve condensed it poorly for whatever point you were making there.
Perhaps the best solution is to just delete that particular paragraph? As far as I can tell, it’s not relevant to the rest of the arguments, and this summary is already fairly long and somewhat disjointed.
“I did have a lot of trouble understanding what the point of that section was, though, so it’s plausible that I’ve condensed it poorly for whatever point you were making there.”
My thinking here is something like: humans became smart via cultural evolution, but standard AI safety arguments ignore this fact. When we think about AI progress from this perspective though, we get a different picture of the driving forces during the takeoff period. In particular, the three things I’ve listed are all ways that interactions between AGIs will be crucial to their capabilities, in addition to the three factors which are currently crucial for AI development.
Will edit to make this clearer.