Vaniver wasn’t talking about Harry’s evaluation of future outcomes; he was talking about Harry’s predictions of future thoughts that future people would have. That’s why Vaniver said “why does he think the future will hold life to be precious”, etc. “He think the future will” clearly refers to a prediction made by Harry.
So your response is only relevant if you were trying to say Harry’s predictions were tainted by his value judgements. But I don’t think that’s what you were saying, correct?
Intentions have no impact on the future, only actions do. Unless you want to pretend that the neurons firing around in your brain are causally significant (in terms of effects on the outside world) in any substantive way, which would be dumb. Harry “declaring” that he considers death unacceptable and intends to stop it is “insufficient” to cause immortality. He would need to take actions like making an immortality pill and giving it to everyone, or something.
> Vaniver wasn’t talking about Harry’s evaluation of future outcomes; he was talking about Harry’s predictions of future thoughts that future people would have. That’s why Vaniver said “why does he think the future will hold life to be precious”, etc. “He think the future will” clearly refers to a prediction made by Harry.
I believe you are incorrectly modelling the way Harry thinks and misunderstanding the implications of the words Harry has uttered. The implicit prediction is conditional: conditional, for example, on avoiding catastrophic failure and extinction. To illustrate the position: Harry would not change the thinking here, or the degree to which his meaning is valid, if he happened to believe that there was a 95% chance of human extinction rather than any particular evaluation by future humans.
> So your response is only relevant if you were trying to say Harry’s predictions were tainted by his value judgements. But I don’t think that’s what you were saying, correct?
That is not my primary point. I would perhaps also say that this is likely, or at least that he uses overconfident rhetoric when expressing himself, to a degree that my instincts warn me to disaffiliate.
> Intentions have no impact on the future, only actions do. Unless you want to pretend that the neurons firing around in your brain are causally significant (in terms of effects on the outside world) in any substantive way, which would be dumb.
I assert the thing that you say is dumb. My model of causality doesn’t consider atoms inside the computational structure of powerful optimization agents to be qualitatively different in causal significance to atoms outside of such entities. Neurons firing around in powerful brains are among the most causally significant things in existence.