I can’t recall specific names or specific treatments of this, but I’m fairly sure it’s familiar, so I suspect something exists there. The problem is that, in some sense, it falls right between areas.
On the practical side, people don’t really care where a problem originates; they just want to fix it. (And people are swimming in abstraction layers and navigate them intuitively, so “this is one layer up” doesn’t really stand out, since it’s still the bottom layer of some other abstraction.) From the perspective of computer security, it just falls into the big bucket called a “denial of service” vulnerability, alongside lots of other things: temporarily slowing a system down, making it do 2^51 compute steps (i.e. locking it up effectively forever, but not “truly” forever), or drowning it in outside traffic. But there’s generally no category separation between “possible bugs” (which just happen to be in one specific implementation) and “necessary bugs” (the “logic bugs” that are in the spec itself, where every conforming implementation ends up with the same problem).
On the theoretical side, you mostly work with bounded recursion instead of possibly unbounded loops, so the problem is more or less impossible to express. (Turing machines exist on the theory side too, but working with them is painful, so it doesn’t happen much, and people don’t really notice problems that only show up there. So it’s mostly bounded recursion.) As an example, if you write your program in Coq, you’re forced to prove that every recursive computation terminates before you’re allowed to compile (and run) it. So as soon as you care even a little about verification, the problem basically disappears, or rather becomes invisible. In theory, the path to reaching a fixed point, where the program reacts to every input in exactly the same way, is still open (by messing up your state), but that’s very unlikely to be hit by chance (especially if you write in the typical stateless style), so people might never stumble on it, and the chances of thorough exploration are rather low too. (Still feels like there was something, somewhere...)
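To make that state-corruption failure mode concrete, here’s a minimal sketch (the handler, its state, and the bug are all invented for illustration): every individual call terminates, so a termination checker is perfectly happy, yet once the state is corrupted the system reacts to every input identically, forever.

```python
# Hypothetical sketch: each call terminates, but a corrupted state
# drives the system into a fixed point where it gives the same
# response to every input from then on.

class Handler:
    def __init__(self):
        self.state = 0  # stands in for any mutable server state

    def handle(self, request: int) -> str:
        # Deliberate bug for illustration: a bad input clamps the state
        # to a value the normal update rule can never escape.
        if request < 0:
            self.state = -1
        if self.state == -1:
            return "error"  # same answer for every future request
        self.state = (self.state + request) % 100
        return f"ok:{self.state}"

h = Handler()
print(h.handle(3))    # normal operation → "ok:3"
print(h.handle(-5))   # corrupts the state → "error"
print(h.handle(7))    # stuck: identical response regardless of input
print(h.handle(42))
```

Note that nothing here loops forever; the “forever” lives one level up, in the sequence of calls, which is exactly why per-function termination proofs don’t see it.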
In other areas:
From the perspective of dynamical systems, it’s just a fixed point.
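For the dynamical-systems reading, a minimal sketch (the map f is an arbitrary invented example, chosen so every orbit lands on a fixed point): iterate the map until the state stops changing, i.e. until you reach an x with f(x) == x.

```python
# Minimal sketch of the dynamical-systems view: iterate a map until it
# reaches a fixed point x with f(x) == x. The map (integer halving) is
# an arbitrary invented example whose only fixed point is 0.

def f(x: int) -> int:
    return x // 2  # example map; f(0) == 0, so 0 is a fixed point

def iterate_to_fixed_point(x: int) -> list[int]:
    trajectory = [x]
    while f(x) != x:
        x = f(x)
        trajectory.append(x)
    return trajectory

print(iterate_to_fixed_point(13))  # → [13, 6, 3, 1, 0]
```

The stuck system from above is exactly an orbit that has landed on such a fixed point: applying the dynamics again changes nothing.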
In temporal logic, it’s something that holds “eventually forever” (in LTL terms, a property of the form FG p: from some point on, p holds at every step).
If you can’t find anything after using those to find more keywords, I’d suggest asking someone at the Santa Fe Institute; they do lots of dynamical-systems work, and if it exists at all, there’s a pretty good chance someone there has heard of it. (I’ve seen them mention nearby concepts in one of the “Complexity Explorer” courses. According to that page, Cristopher Moore is doing a computability etc. course soon-ish, so he should either be the right person to ask or know who to forward it to.)