You criticize mere arguments and then respond with some of your own. Of all the non-normal LessWrong memes, the orthogonality thesis doesn't strike me as particularly out there.
The basic arithmetic of AI risk is: [orthogonality thesis] + [agents more powerful than us seem feasible with near-future technology] + [the large space of possible goals] = [we have to be very careful building the first AIs].
These seem like conservative conclusions derived from conservative assumptions. You don't even have to buy recursive self-improvement at all.
Ironically, I think the blog post you linked was an example of rank scientism. I mean, sure, induction is great. But by his reasoning, we really shouldn't worry about global warming until we've tested our models on several identical copies of Earth. He thinks if it's not physics, then it's tarot.
I agree with many of your criticisms of MIRI. It was (as far as I can tell) extremely poorly run for a very long time, but don't go throwing out the apocalypse with the bathwater. Isn't it possible that MIRI is a dishonest cult and that AI is extremely likely to kill us all?