That would make (human[s] + predictor) into an optimization process that was powerful beyond the human[s]'s ability to steer. You might see a nice-looking prediction, but you won't understand the value of the details, or the value of the means used to achieve it. (These would be called trade-offs in a goal-directed mind, but nothing weighs them here.)
It also won't be reliable to look for models in which you are predicted not to hit the Emergency Regret Button, as that may just find models in which your regret evaluator is modified.
Is a human equipped with Google an optimization process powerful beyond the human’s ability to steer?
Tell me from China.