> > If you’re looking for recent, canonical one-stop-shop, the answer is List of Lethalities.
>
> List of lethalities is not by any means a “one stop shop”. If you don’t agree with Eliezer on 90% of the relevant issues, it’s completely unconvincing. For example, in that article he takes as an assumption that an AGI will be godlike level omnipotent, and that it will default to murderism.
> If you don’t agree with Eliezer on 90% of the relevant issues, it’s completely unconvincing.
Of course. What kind of miracle are you expecting?
It also doesn’t go into much depth on many of the main counterarguments, and it doesn’t go into enough detail to come anywhere close to “logically sound”. It’s not as condensed as I’d like, and it skips over a bunch of background. Still, it’s valuable, and it’s the closest thing to a one-post summary of why Eliezer is pessimistic about the outcome of AGI.
The main value of List of Lethalities as a one-stop shop is that you can read it and then point to roughly where you disagree with Eliezer, which is probably what you want if you’re looking for canonical arguments for AI risk. Then you can look further into that disagreement if you want.
Reading the rest of your comment very charitably: It looks like your disagreements are related to where AGI capability caps out, and whether default goals involve niceness to humans. Great!
If I read your comment more literally, my guess would be that you haven’t read List of Lethalities, or that you are happy to misrepresent positions you disagree with.
> he takes as an assumption that an AGI will be godlike level omnipotent
He specifically defines a dangerous intelligence level as roughly the level required to design and build a nanosystem capable of building a nanosystem, or any of several alternative example capabilities (point 3). Maybe your omnipotent gods are lame.
> and that it will default to murderism
This is false. Maybe you are referring to how there isn’t a section justifying instrumental convergence? But it does have a link, and it notes that it’s skipping over a bunch of background in that area (point -3). That would be a different assumption, but if you’re deliberately misrepresenting the post, that might be the part you are misrepresenting.