I don’t actually understand this, and I feel like it needs to be explained a lot more clearly.
“Whatever the fundamental physical reality of a moment of experience I’m suggesting that that reality changes as little as it can.”—What does this mean? Using the word “can” here implies some sort of intelligence “choosing” something. Was that intended? If so, what is doing the choosing? If not, what is causing this property of reality?
“Because of this human beings are really just keeping track of themselves as models of objective reality, and their ultimate aim is in fact to know and embody the entirety of objective reality (not that any of them will succeed).”—Human beings don’t seem to act in the way I would expect them to act if this were their goal. For instance, why do I choose to eat foods I know are tasty and take routes to work I’ve already taken, instead of seeking out new experiences every time and widening my understanding of objective reality? What difference would I expect to see in human behaviour if this claimed ultimate aim were false?
“This sort of thinking becomes a next to nothing, but not quite nothing, requirement for any mind, regardless of how vastly removed from another mind it is, to have altruistic concern for any other mind in the absolute longest term (because their fully ultimate aim would have to be the exact same).”—I don’t understand how this point follows from your previous ones, or even what the point actually is. Are you saying “All minds have the same fundamental aim, therefore we should be altruistic to each other”?