I like the combination of conciseness and thoroughness you’ve achieved with this.
There are a couple of specific parts I’ll quibble about:
> Therefore the next logical step is to use science to figure out how to replace humans by a better version of themselves, artificial general intelligence.
“The Automation of Science” section seems weaker to me than the others, perhaps even superfluous. I think the line I’ve quoted is the crux of the problem; I highly doubt that the development of AGI will be driven by any such motivations.
> Will we be able to build an artificial general intelligence? Yes, sooner or later.
I assign a high probability to the proposition that we will be able to build AGI, but I think a straight “yes” is too strong here.
Agreed—AGI will probably not be developed with the aim of improving science.
I also want to quibble about this:
> Therefore the next logical step is to use science to figure out how to replace humans by a better version of themselves
Since most readers don’t want to be replaced, at least on one interpretation of that term, this line sticks in the throat and breaks the flow. The natural response is something like “logical? According to whose goals?”