I’m not sure it’s productive to engage with this stuff. Taking a GRAND STAND may feel good, but in many cases people end up becoming useful foils. Block liberally, don’t engage, focus on what actually matters.
I’m not necessarily advocating for direct engagement! If engagement with this stuff won’t decrease AI risk, then I don’t want to engage. If it does, then I do. Some of these people/orgs are influential (Venkatesh Rao, HuggingFace), so unfortunately, their opinions do actually matter. As nice as it would feel to ignore the haters, public opinion is in fact a strategic asset when it comes to actually implementing AI safety proposals at major labs.
Do you have any evidence that Venkatesh Rao is influential? I’ve never seen him quoted by anyone outside the rationality community.