In a world where the median IQ is 143, the people at +3σ are at 188. They might succeed where the median fails.
I don’t think a lack of IQ is the reason we’ve been failing to build AI sensibly. Rather, it’s a lack of well-designed incentives.
Building an AI recklessly is currently much more profitable than not doing so, which, in my opinion, exposes a flaw in the efforts that have gone into making AI safe: they haven’t accepted that some people hold very different mindsets, beliefs, and core values, and haven’t worked out a structure or argument that would incentivize people across that broad range of mindsets.