My very similar post had a somewhat better reception, although some people certainly disagreed. I think there are two things going on. Firstly, Lucas’s post, and perhaps my post, could have been better written.
Secondly, and this is just my opinion, people coming from the orthodox alignment position (EY) have become obsessed with the need for a pure software solution, and have no interest in shoring up civilization’s general defenses by banning the most dangerous technologies an AI could use. As I understand it, they feel that focusing on how the AI does the deed is a misconception, because the AI will be so smart that it could kill you with a butter knife and no hands.
Possibly the crux here is a disagreement about which paths are promising, which are a waste of time, and how much collective activism effort we have left, given the time on the clock. Let me know if you disagree with this model.
Yes, the linked post makes a lot of sense: wet labs should be heavily regulated.
Most of the disagreement here is based on two premises:
A: Other vectors (wet labs, etc.) present a greater threat. Maybe, though intelligent weapons are the most clearly misanthropic application of AI.
B: AI will become so powerful, so quickly, that limiting its vectors of attack will not be enough.
If B is true, the only solution is a general ban on AI research. However, that would require a coordinated effort across the globe. There is far more support for halting intelligent weapons development than for a general ban, so a general ban could follow as a subsequent agreement.