Thanks! Good summary. A couple of quick points:

> that is, without relying on other people’s arguments

doesn’t feel quite right to me, since obviously a bunch of these arguments have been made before. It’s more like: without taking any previous claims for granted.
> there are then three key advantages of an AI system that would allow it to go further

Your list of 3 differs from my list of 3. Also, my list is not of key advantages, but of features which don’t currently contribute to AI progress but will after we’ve got human-level AGI. I think that AIs will also have advantages over humans in data, compute and algorithms, which are the features that currently contribute to AI progress; and if I had to pick, I’d say data+compute+algorithms are more of an advantage than replication+cultural learning+recursive improvement. But I focus on the latter because they haven’t been discussed as much.
> “that is, without relying on other people’s arguments” doesn’t feel quite right to me, since obviously a bunch of these arguments have been made before. It’s more like: without taking any previous claims for granted.
Changed, though the way I use words, those phrases mean the same thing.
> Your list of 3 differs from my list of 3.
Yeah this was not meant to be a direct translation of your list. (Your list of 3 is encompassed by my first and third point.) You mentioned six things:
> more compute, better algorithms, and better training data
and
> replication, cultural learning, and recursive improvement
which I wanted to condense. (The model size point was meant to capture the compute case.) I did have a lot of trouble understanding what the point of that section was, though, so it’s plausible that I’ve condensed it poorly for whatever point you were making there.
Perhaps the best solution is to just delete that particular paragraph? As far as I can tell, it’s not relevant to the rest of the arguments, and this summary is already fairly long and somewhat disjointed.
> I did have a lot of trouble understanding what the point of that section was, though, so it’s plausible that I’ve condensed it poorly for whatever point you were making there.
My thinking here is something like: humans became smart via cultural evolution, but standard AI safety arguments ignore this fact. When we think about AI progress from this perspective though, we get a different picture of the driving forces during the takeoff period. In particular, the three things I’ve listed are all ways that interactions between AGIs will be crucial to their capabilities, in addition to the three factors which are currently crucial for AI development.
Will edit to make this clearer.