I sometimes think that part of the seductiveness of the “safe oracle AI” idea comes from the assumption that the AI really will be like a TM—it will have no interaction with the external world between the reading of the input tape and the writing of the answer. To the contrary, the danger arises because the AI will interact with us in the interim between input and output with requests for clarification, resources, and assistance. That is, it will realize that manipulation of the outside world is a permitted method of achieving its mission.
A forecaster already has actuators—its outputs (forecasts).
Its attempts to manipulate the world seem pretty likely to use its existing output channel initially.