Do you propose that humans could, if not achieve, then at least get much closer to the efficient evidence use of a hypothetical super-AI? Let's say, savage to Einstein in a lifetime, assuming said savage starts out pre-trained in Bayescraft.