I haven’t heard your argument against “Describing …” yet, but assuming you believe such an argument exists, I estimate the chance that I will still believe it after hearing your argument at 0.6.
Now for my guesses at the two problems:
The first possible problem is describing “mass” and “energy” to a system that basically only has sensor readings. However, if we can describe concepts like “human” or “freedom”, I expect descriptions of matter and energy to be simpler (even though 10,000 years ago it was easier to tell somebody about “humans” than about mass, that was not the same concept of “humans” we would actually like to describe). And for “mass” and “energy”, physicists already have quite formal descriptions.
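To give one illustration of what I mean by a formal description (just an example I am picking, not a proposal for how the AI would actually be told this): in special relativity the mass of a system is the invariant of its four-momentum, m^2 c^4 = E^2 - (pc)^2, and energy is the conserved quantity associated with time-translation symmetry, so both concepts are pinned down by equations rather than by pointing at examples.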
The other problem is that mass and energy might not be strictly contained within a certain part of space: as per physics, it is just that the probability of their having an effect outside that region drops to essentially zero as the distance grows. Thus removing all energy and matter somewhere might produce subtle effects somewhere totally different. However, I expect these effects to be too subtle to even matter to the AI, because they already become smaller than the local quantum noise at very short distances.
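As a toy sketch of the kind of falloff I have in mind (my own simplification, not a rigorous argument): for a particle localized as a Gaussian wavepacket of width σ, the probability of finding it at a distance greater than r from its centre drops roughly like exp(-r^2 / (2σ^2)), which for any macroscopic r is far too small for the AI’s sensors to distinguish from local noise.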
Regarding the condescending “I say this...”: I would have liked it more if you had stated explicitly that your preference originates from a wish to further my learning. I have no business optimizing your value function. Anyway, I operate by Crocker’s Rules.
I don’t know if I’m thinking of what Robin’s after, but the statement at issue strikes me as giving neither necessary nor sufficient conditions for destroying agents in any given part of space. If I’m on the same page as him, you’re overthinking it.
I fail to understand the sentence about overthinking. Would you mind explaining?
As for the condition of removing all energy and mass in a part of space not being sufficient to destroy all agents therein, I cannot see the error. Do you have an example of an agent which would continue to exist in those circumstances?
That the condition is not necessary is true: I can shoot you and you die, with no need to remove much mass or energy from the part of space you occupy. However, we don’t need a necessary condition, only a sufficient one.
Well, yes, we don’t need a necessary condition for your idea, but presumably, if we want to make even a passing attempt at friendliness, we’re going to want the AI to know not to burn live humans for fuel. If we can’t do better, an AI is too dangerous, with or without this back-up in place.
As for the condition of removing all energy and mass in a part of space not being sufficient to destroy all agents therein, I cannot see the error.
Well, you could remove the agents and the mass surrounding them to some other location, intact.
This is what I was planning to say, yes. A third argument: removing all mass and energy from a volume is—strictly speaking—impossible.
Because a particle’s wave function never hits zero, or for some other reason?
I was thinking of vacuum energy, actually—the wavefunction argument just makes it worse.
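To make the vacuum-energy point slightly more concrete (a standard textbook fact, not anything specific to the proposal here): each mode of a quantum field with frequency ω behaves like a harmonic oscillator with ground-state energy (1/2)ħω, so a region containing no particles still does not sit at zero energy.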
The wavefunction argument is incorrect. At the level of quantum mechanics, particles’ wavefunctions can easily be zero: trivially at points, and with a little more effort over ranges. At the level of QFTs, yes, vacuum fluctuations kick in and do prevent space from being “empty”.
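A concrete example of the “over ranges” case, in case it helps (standard QM, nothing exotic): the ground state of the infinite square well on [0, L], ψ(x) = sqrt(2/L) · sin(πx/L) inside the well and ψ(x) = 0 outside, is identically zero over the whole region outside the well, not just at isolated points.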
I apologize—that was, in fact, my intent.