> For example, it is a logical contradiction for someone to predict my actions in advance (and tell me about it),
Again, this is not a logical contradiction. You do not have a clear understanding of what the concept entails. It doesn't mean 'sometimes impractical' or 'often people adapt to avoid it'.
No, this really would be a logical contradiction if the agent being predicted does implement the stated algorithm (and won't override it when something more important is at stake). It just has nothing to do with self-improvement, for which predicting abstract properties of specific algorithms is what matters; much as Rice's theorem doesn't mean we can't prove that a specific program outputs pi, for example.
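To make that concrete, here is a minimal sketch (Python, hypothetical names; an illustration, not anyone's actual proposal) of an agent implementing the stated algorithm: it takes the announced prediction as input and does the opposite, so no announcement the predictor can make comes out true.

```python
# Minimal sketch (hypothetical names): an agent whose stated algorithm is
# "whatever action you announce I'll take, I take the other one".

ACTIONS = ("A", "B")

def contrarian_agent(announced_prediction):
    # Do the opposite of whatever was announced.
    return "B" if announced_prediction == "A" else "A"

# Whichever action the predictor announces, the announcement is falsified:
for prediction in ACTIONS:
    assert contrarian_agent(prediction) != prediction
```

Note that the contradiction lives entirely in the announcement step: an unannounced prediction of the same agent can be perfectly accurate, since nothing is fed back into it.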
> No, this really would be a logical contradiction if the agent being predicted does implement the stated algorithm
No, it is not a logical contradiction. The fact that someone can implement a stupid algorithm does not establish the claim "it is a logical contradiction for someone to predict my actions in advance and tell me about it". Just because someone could implement a stupid algorithm for decision making, or a naive algorithm for prediction (one that doesn't know when to shut up), doesn't mean you can make that general claim. Not even close.
Your argument would probably apply if I were refuting a different but somewhat related assertion.
> No, it is not a logical contradiction. The fact that someone can implement a stupid algorithm does not establish the claim "it is a logical contradiction for someone to predict my actions in advance and tell me about it". Just because someone could implement a stupid algorithm for decision making, or a naive algorithm for prediction (one that doesn't know when to shut up), doesn't mean you can make that general claim. Not even close.
It does mean you can make a general claim analogous to Rice’s theorem / the undecidability of the halting problem — not that such a claim is incredibly interesting for our purposes.
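For instance, here is a hedged sketch (hypothetical names) of that general claim, using the same diagonalization as the halting-problem proof: from any purported predictor one can build an agent the predictor gets wrong, by having the agent consult the predictor about itself and invert the answer.

```python
# Hedged sketch (hypothetical names): given ANY prediction rule, construct
# an agent that asks the rule about itself and does the opposite, so no
# rule is correct about every agent.

def make_diagonal_agent(predictor):
    def agent():
        predicted = predictor(agent)  # ask the predictor about this very agent
        return "B" if predicted == "A" else "A"
    return agent

def naive_predictor(agent):
    # Stand-in for any concrete prediction rule; this one always guesses "A".
    return "A"

agent = make_diagonal_agent(naive_predictor)
assert agent() != naive_predictor(agent)  # the rule is wrong about this agent
```

Like the halting problem, this rules out a fully general predictor, not accurate prediction of specific, well-understood algorithms.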
> Your argument would probably apply if I were refuting a different but somewhat related assertion.
Point taken; it doesn’t seem like we actually disagree about anything.
> It does mean you can make a general claim analogous to Rice's theorem / the undecidability of the halting problem — not that such a claim is incredibly interesting for our purposes.
The cache of this conversation is buried somewhat in my brain but I think there is something to what you say here.