Lopsidedness: Does AI risk require solving all the pieces, or does it suffice to have an idiot savant, one that exceeds human capabilities on only some axes while still underperforming on others?