I think someone who has understood and taken seriously my last two posts is likely to have a better understanding of the dangers of an unfriendly AI than most AGI researchers, and is therefore less likely to behave recklessly (the other likely possibility is that they will think that I am describing ridiculous and irrelevant precautions, in which case they were probably going to behave recklessly already).
You don’t seem to have attempted to make a case for the real-world relevance of any of this. It has been more a case of: here’s what you could do, if you were really, really paranoid. Since you don’t make an argument about real-world relevance in the first place, it isn’t obvious why you would expect people to update their views on that topic.
If people don’t update their views, then I’m already golden and this discussion can do no damage. If people think these arguments are too paranoid, then they are already not paranoid enough and this discussion can do no damage. The fear is that a person who would otherwise be responsible might come to suspect that boxing is theoretically possible, revise their beliefs about the dangers of uFAI downward (the theoretical possibility of a thing is undoubtedly correlated with its practical possibility), and therefore do less to avoid it. Even if the odds of this are small, it would be an issue. I concluded that the expected benefit I cite is small, but large enough to offset this risk.