I think there’s a solid case that anyone who supported funding OpenAI was, at best, well-intentioned but very naive. The idea that we should align and develop superintelligence but, like, the good kind, has always been a blind spot in this community: an obviously flawed but attractive goal, because it dodged the painful choice between extinction risk and abandoning hopes of personally witnessing the singularity, or at least a post-scarcity world. This is also a case where people’s politics probably affected them. Plenty of others would be instinctively distrustful of corporation-driven solutions to anything (it’s something of a Godzilla Strategy, after all; aligning corporations is also an unsolved problem), but those with an above-average level of trust in free markets weren’t so averse.
Such people don’t necessarily have conflicts of interest (though some may, and that’s another story), but they at least need to drop the fantasy-land stuff and accept the harsh reality here before they can be of any use.
I mostly agree with premises 1, 2, and 3, but I don’t see how the conclusion follows.
It is possible for things to be hard to influence and yet still worth it to try to influence them.
(Note that the $30 million grant was not an endorsement but a partnership (it came with a board seat, for example); see Buck’s comment.)
(Ex post, I think this endeavour was probably net negative, though I’m pretty unsure; evaluated ex ante, I currently think it seemed great.)