In case you were unaware, every example of other types of ‘lethal’ in the parent has the potential to eliminate all human life. And not in a hand-wavy sense either: truly 100%, the same death rate as the worst-case AGI outcomes.
Which means that, to a knowledgeable reader, the wording is unpersuasive, since the point is made before it has been established that there is a potential outcome even worse than 100% extinction.
Establishing that shouldn’t be too hard, since the topic was regularly discussed on LW… dust specks, simulated tortures, etc.
I don’t know why neither you nor Eliezer includes the obvious supporting points, or links to someone who does, before the assertion (or at least not buried well past it), since you seem to be trying to reinforce his points and Eliezer ostensibly set out to write a summary for the non-expert reader.
If there’s a new essay style I didn’t get the memo about, where the weak arguments go at the beginning and the stronger ones near the end, then I could see why it was written this way.
For the rest of your points I see the same mistake: strong assertions without equally strong evidence to back them up.
For example, none of the posts from the regulars I’ve seen on LW asserts, without any hedging, that there’s a 100% chance of human extinction from an arbitrary Strong AI.
I’ve seen a few claims that there’s a 100% chance Clippy would do so if Clippy arose first, though even those are somewhat iffy. And definitely none saying there’s a 100% chance that Clippy, and only Clippy, would arise and reach an equilibrium end state.
If you know of any such post, please provide a link.