Yes, this is a good point. Aimability Research increases the kurtosis of the AI outcome distribution, making both the right tail (paradise) and the left tail (total annihilation) heavier, and reducing the so-so outcomes in the center.
Only Goalcrafting Research can change the relative weights.
The aspect of aimability in which an AI becomes able to consistently want something in particular improves capabilities, and improved capabilities make AI matter far more. This can happen without the other key aspect: the ability to aim an AI where you want it aimed. Without that latter aspect, aimability is not “solved”, yet AIs become dangerous.
Yes, good point. We might have something like “Self Aimability” for AI before we have the ability to set the point of aim.