A good metaphor is a cliff. A cliff poses a risk in that it is physically possible to drive over it. In the same way, it may be physically possible to build a very dangerous AI. But nobody wants to do that, and—in my view—it looks quite avoidable.
That sounds naive and gives the impression that you haven’t taken the time to understand the AI risk concerns. You offer no argument beyond the fact that you don’t see the problem of AI risk.
The prevailing wisdom in this community is that most AGI designs are going to be unsafe, and that much of the unsafety isn’t obvious beforehand. There’s the belief that if the value alignment problem isn’t solved before human-level AGI arrives, that means the end of humanity.