The post answers the first question, “Will current approaches scale to AGI?”, in the affirmative and then seems to run with that assumption for the rest of the argument.
I think the post makes a good case that Yudkowsky’s pessimism is not applicable to AIs built with current architectures and scaled-up versions of current architectures.
But it doesn’t address the following cases:
Systems composed of such architectures
Systems built by systems that are smarter than humans
Such architectures used by actors that do not care about alignment
I believe that for these cases, Yudkowsky’s arguments and pessimism still mostly apply, though some of Robin Hanson’s counterarguments also seem relevant.