I think there’s a solid case that anyone who supported funding OpenAI should be considered, at best, well-intentioned but very naive. The idea that we should develop and align superintelligence, but, like, good, has always been a blind spot in this community: an obviously flawed but attractive goal, because it dodged the painful choice between extinction risk and abandoning hopes of personally witnessing the singularity, or at least a post-scarcity world. This is also a case where people’s politics probably affected them. Plenty of others would be instinctively distrustful of corporation-driven solutions to anything (it’s something of a Godzilla Strategy, after all; aligning corporations is also an unsolved problem), but those with an above-average level of trust in free markets weren’t so averse.
Such people don’t necessarily have conflicts of interest (though some may, and that’s another story), but they at least need to drop the fantasy-land stuff and accept the harsh reality here before they can be of any use.