Peter Wegner has produced dozens of papers over the past few decades arguing that Turing Machines are inadequate models of computation, as computation is actually practiced. To get a sample of this, Google “Wegner computation interaction”.
I had a look, and at a brief glance I didn’t see anything beyond what CSP and CCS were invented for more than 30 years ago. The basic paper defining a type of interaction machine is something that I would have guessed was from the same era, if I didn’t know it was published in 2004.
I say this as someone who worked on this sort of thing a long time ago—in fact, I did my D.Phil. with Hoare (CSP) and a post-doc with Milner (CCS), more than 30 years ago. I moved away from the field and know nothing about recent developments, but I am not getting the impression from the 2006 book that Wegner co-edited on the subject that there have been many. Meanwhile, in the industrial world people have been designing communication protocols and parallel hardware, with or without the benefit of this theoretical research, for as long as there have been computers.
None of which is to detract from your valid point that thinking of the AI as a TM, and neglecting the effects of its outputs on its inputs, may lead people into the safe oracle fallacy.