Yeah, I was intentionally vague with “the probabilistic nature of things”. I am also thinking about how any AI will have logical uncertainty, uncertainty about the precision of its observations, et cetera, so that as it considers points further in the future, its predictive distribution becomes flatter. And a non-dualist framework would introduce uncertainty about the agent’s self, its utility function, its memory, …
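As a toy illustration of the flattening point (a minimal sketch, not anything from the comment itself): assume the agent’s world model adds independent Gaussian noise with standard deviation sigma at each step. Then the horizon-t predictive distribution has variance t·σ², so its differential entropy grows like ½·ln t, i.e. the forecast gets flatter the further out it looks. The model and the name `predictive_entropy` are hypothetical.

```python
import math

def predictive_entropy(horizon: int, sigma: float = 1.0) -> float:
    """Differential entropy (in nats) of a Gaussian random-walk forecast.

    Toy assumption: the world model adds i.i.d. N(0, sigma^2) noise per
    step, so the horizon-t prediction is N(x0, horizon * sigma^2).
    """
    variance = horizon * sigma**2
    # Entropy of a Gaussian with this variance: 0.5 * ln(2*pi*e*var).
    return 0.5 * math.log(2 * math.pi * math.e * variance)

for t in (1, 10, 100, 1000):
    print(f"horizon {t:>4}: entropy = {predictive_entropy(t):.3f} nats")
```

This only captures the simplest compounding-noise mechanism; logical uncertainty, imprecise observations, and uncertainty about the agent’s own self or utility function would contribute further variance terms and flatten the distribution faster.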