Current discourse around AI safety (at least among people who have been paying attention) has a pretty dark, pessimistic tone, and for good reason: we’re getting closer to technology that could accidentally do massive harm to humanity.
But when people or groups feel pessimistic, it’s hard to get good work done, even when that pessimism is grounded in real-world facts.
I think we need to develop an optimistic but realistic point of view: acknowledging the difficulty of where we are, but nonetheless being hopeful and full of energy about finding the solution. AI alignment can be solved; we just have to actually put in the effort to solve it, and perhaps a lot faster than we are currently prepared to.
Indeed. Good SciFi does both for me: terror at being a passenger in this train wreck, and ideas for how heroes can derail the AI commerce train or hack the system to switch tracks for the public transit passenger train. Upgrade and Recursion did that for me this summer.