I think your “charitable misinterpretation” is pretty much what trevor is saying: he’s concerned that LW users might become targets for some sort of attack by well-resourced entities (something something military-industrial complex something something GPUs something something AI), and that if multiple LW users are using the same presumably-insecure device that might somehow be induced to damage their health then that’s a serious risk.
I’m not sure exactly what FDA approval would entail, but my guess is that it doesn’t involve the sort of security auditing that would be necessary to allay such concerns.
See e.g. https://www.lesswrong.com/posts/pfL6sAjMfRsZjyjsZ/some-basics-of-the-hypercompetence-theory-of-government (“trying to slow the rate of progress risks making you an enemy of the entire AI industry”, “trying to impede the government and military’s top R&D priorities is basically hitting the problem with a sledgehammer. And it can hit back, many orders of magnitude harder”).