This is an interesting perspective on the AI safety problem. I really like the ethos of this post, where there isn’t a huge opposition between AI capabilities and AI safety, but instead we are simply trying to figure out how to use the (helpful!) capabilities developed by AI researchers to do useful things.
If I think about this from the perspective of reducing existential risk, it seems like you would also need to make the argument that AI systems are unlikely to pose a great threat before they are human-level (a claim I mostly agree with), or that the solutions will generalize to sub-human-level AI systems. Is there a reason this isn’t in the post? I worry that I’m not properly understanding the motivations or generators behind the post.