Peter Wegner has produced dozens of papers over the past few decades arguing that Turing Machines are inadequate models of computation, as computation is actually practiced. To get a sample of this, Google “Wegner computation interaction”.
I sometimes think that part of the seductiveness of the “safe oracle AI” idea comes from the assumption that the AI really will be like a TM—it will have no interaction with the external world between the reading of the input tape and the writing of the answer. To the contrary, the danger arises because the AI will interact with us in the interim between input and output with requests for clarification, resources, and assistance. That is, it will realize that manipulation of the outside world is a permitted method in achieving its mission.
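To make the distinction concrete, here is a minimal sketch (in Go, with invented names) contrasting the TM picture of an oracle, a pure function from question to answer, with an interactive picture in which a single run can emit requests back to its operators before it answers:

```go
package main

import "fmt"

// The TM picture: the oracle is a pure function. Nothing happens
// between reading the question and writing the answer.
func oracleTM(question string) string {
	return "answer to: " + question
}

// The interactive picture: during a single run the oracle can send
// requests (clarification, resources, assistance) out on a channel
// and wait for the operators' replies before producing its answer.
func oracleInteractive(question string, requests chan<- string, replies <-chan string) string {
	requests <- "please clarify: " + question
	clarification := <-replies
	requests <- "please grant more compute"
	<-replies
	return "answer to: " + question + " (" + clarification + ")"
}

func main() {
	fmt.Println(oracleTM("how do we cure X?"))

	requests := make(chan string)
	replies := make(chan string)
	done := make(chan string)
	go func() { done <- oracleInteractive("how do we cure X?", requests, replies) }()

	// The operators service each request; every reply is an action
	// the oracle has induced in the outside world mid-computation.
	for {
		select {
		case r := <-requests:
			fmt.Println("oracle asks:", r)
			replies <- "granted"
		case answer := <-done:
			fmt.Println(answer)
			return
		}
	}
}
```

The danger sits entirely in the second shape: each serviced request is the outside world being changed before the "answer" ever appears.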
Peter Wegner has produced dozens of papers over the past few decades arguing that Turing Machines are inadequate models of computation, as computation is actually practiced. To get a sample of this, Google “Wegner computation interaction”.
I had a look, and at a brief glance I didn’t see anything beyond what CSP and CCS were invented for more than 30 years ago. The basic paper defining a type of interaction machine is something that I would have guessed was from the same era, if I didn’t know it was published in 2004.
I say this as someone who worked on this sort of thing a long time ago—in fact, I did my D.Phil. with Hoare (CSP) and a post-doc with Milner (CCS), more than 30 years ago. I moved away from the field and know nothing about developments of recent years, but I am not getting the impression from the 2006 book that Wegner has co-edited on the subject that there has been much. Meanwhile, in the industrial world people have been designing communication protocols and parallel hardware, with or without the benefit of these theoretical researches, for as long as there have been computers.
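For what it's worth, the kind of ongoing input/output dialogue an "interaction machine" is supposed to capture is exactly what CSP-style channel communication already expresses. A minimal sketch in Go, whose channels are directly CSP-inspired (the names here are invented for illustration):

```go
package main

import "fmt"

// A CSP-style process: an accumulator that never sees its "whole input"
// up front. Each output depends on the entire history of interactions
// so far, which is the property the interaction-machine papers emphasise.
func accumulator(in <-chan int, out chan<- int) {
	total := 0
	for x := range in {
		total += x
		out <- total // respond to each input before the next arrives
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go accumulator(in, out)

	// The "environment" interleaves its inputs with the process's outputs,
	// and could choose each later input based on the earlier outputs.
	for _, x := range []int{3, 1, 4} {
		in <- x
		fmt.Println("running total:", <-out)
	}
	close(in)
}
```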
None of which is to detract from your valid point that thinking of the AI as a TM, and neglecting the effects of its outputs on its inputs, may lead people into the safe oracle fallacy.
To the contrary, the danger arises because the AI will interact with us in the interim between input and output with requests for clarification, resources, and assistance. That is, it will realize that manipulation of the outside world is a permitted method in achieving its mission.
Except this is not the case for the AI I describe in my post.
The AI I describe in my post cannot make requests for anything. It doesn't need clarification because we don't ask it questions in natural language at all! So I don't think your criticism applies to this specific model.

I sometimes think that part of the seductiveness of the “safe oracle AI” idea comes from the assumption that the AI really will be like a TM—it will have no interaction with the external world between the reading of the input tape and the writing of the answer. To the contrary, the danger arises because the AI will interact with us in the interim between input and output with requests for clarification, resources, and assistance. That is, it will realize that manipulation of the outside world is a permitted method in achieving its mission.
A forecaster already has actuators—its outputs (forecasts).
Its attempts to manipulate the world seem pretty likely to use its existing output channel initially.
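A toy sketch of the feedback loop being pointed at here (all names invented): the forecaster's only "actuator" is the forecast it publishes, but that forecast becomes part of the world state that generates its next observation.

```go
package main

import "fmt"

// worldStep is a stand-in for how the world evolves. Crucially, the
// published forecast is itself part of the world's state: people act
// on it, so it influences the very quantity being forecast.
func worldStep(state, publishedForecast float64) float64 {
	return 0.5*state + 0.5*publishedForecast
}

// forecast is a stand-in for the forecaster's policy. Even a "pure
// predictor" with no other outputs steers the world through this value.
func forecast(observation float64) float64 {
	return observation * 1.1
}

func main() {
	state := 10.0
	for t := 0; t < 5; t++ {
		published := forecast(state)        // the only output channel...
		state = worldStep(state, published) // ...is already an actuator
		fmt.Printf("t=%d forecast=%.2f state=%.2f\n", t, published, state)
	}
}
```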