I can’t think of a way of fitting a forest fire into this model either, which suggests it isn’t useful to think of forest fires under this paradigm.
Forest fires are definitely OPs under my intuitive concept. They consistently select a subset of possible futures (burnt forests). They’re probably something like chemical energy minimizers; if I were to measure their efficacy, it would be something like the number of carbon-based molecules turned into CO2. But the only reason we can come up with semi-formal measures like CO2 molecules or output on wires is that we’re smart human-things. I want to figure out how to measure it algorithmically.
Isn’t the crux of the decision-making process pretending that you could choose any of your options, even though, as a matter of fact, you will choose one?
Yes. But what does “could” mean? It doesn’t mean that they all have equal probability. If literally all you know is that there are n outputs, then giving them 1/n weight is correct. But we usually know more, like the fact that it’s an AI, and it’s unclear how to update on this.
Are you saying that an “AI” outputting random noise could do worse than an “AI” with optimization power measured at zero (i.e. zero intelligence)?
Absolutely. Like how random outputs fed to a car cause it to jerk around and hit things, whereas a zero-capability car just sits there. Also, we’re averaging over all possible outputs with equal weights. Even if most outputs are neutral or harmless, there are usually more damaging outputs than good ones; it’s generally easier to harm than to help. The more powerful the AI’s actuators, the more damage random outputs will do.
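A toy sketch of the averaging I mean, in Python; the listed outputs and their utility numbers are invented, and the only point is that a uniform average over possible outputs can easily come out worse than the do-nothing baseline:

```python
# Toy illustration: average a car-like system's outcomes over all possible
# control outputs with equal weight. Utilities are invented placeholders;
# 0 = neutral, negative = damage. Not a real model of anything.

outputs = {
    "sit_still":      0.0,
    "drive_to_goal": 10.0,
    "jerk_left":     -5.0,
    "jerk_right":    -5.0,
    "hit_wall":     -50.0,
}

uniform_average = sum(outputs.values()) / len(outputs)
print(uniform_average)       # -10.0: blind fiddling averages out badly...
print(outputs["sit_still"])  # 0.0: ...compared to a car that just sits there.
```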
Oops, looks like I was wrong about what you meant (ignore the edit). But yes, if you give a stupid thing lots of power you should expect bad outcomes. A car directed with zero intelligence is not a car sitting still, but precisely what you said was dangerous: a car having its controls blindly fiddled with. But if you just run a stupid program on a computer, it will never acquire power in the first place. Most decisions are neutral, unless they just happen to be plugged into something that has already been optimized to have large physical effects (like a bulldozer). Of those decisions that do have large effects, most will be destructive, but that’s exactly what we should expect from a stupid optimization process acting on something that has already been finely honed by a smart optimization process.
what does “could” mean?
Good question. I think it has something to do with simply defining some set of actions to be your “options”, and temporarily putting all your options on an equal footing, so that you end up with the one with the best consequences, rather than the one you predicted you’d be most likely to choose. I don’t think it even has much to do with probabilities, because then you run into self-fulfilling prophecies: doing what you predicted you’d do, thereby justifying the prediction.
In this case, we want to measure how well an agent did, relative to how it could have done. That is, how good were the consequences of the option it chose, relative to its other options. I don’t see any reason to weight those options according to a probability distribution, unless you know what “half an option” means. And choosing a distribution poses huge problems. After all, we know the agent chose one of the options with probability 1.0, and all the others with probability 0.0.
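One way to make that concrete, as a toy sketch in the spirit of the usual “bits of optimization” measure, assuming we can enumerate the agent’s options and score each outcome (the function name and the scores below are invented for illustration):

```python
# Toy sketch: how well did the agent do relative to how it could have done,
# treating every option as equally "available" (no probability weighting)?
# Measured as -log2 of the fraction of options at least as good as the one chosen.

import math

def optimization_power(chosen_score, option_scores):
    """Bits of selection: -log2(|options scoring >= chosen| / |all options|)."""
    at_least_as_good = sum(1 for s in option_scores if s >= chosen_score)
    return -math.log2(at_least_as_good / len(option_scores))

# Example: an agent with 8 options picks the single best one.
scores = [0.1, 0.4, 0.2, 0.9, 0.3, 0.5, 0.7, 0.6]
print(optimization_power(0.9, scores))  # 3.0 bits: it hit the top 1/8 of its options
```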
Forest fires are definitely OPs under my intuitive concept. They consistently select a subset of possible futures (burnt forests).
Well, you could just compare the rate of oxidation under a flame to the average rate of oxidation of all surfaces (including those that happen to be on fire) within whichever reference class you prefer. (I think choosing a reference class (set of options) is just part of how you define the OP. And you just define the OP whichever way helps you understand the world best.)
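Roughly what I mean, as a sketch; the oxidation rates and the reference class below are made-up placeholders, not measured values:

```python
# Toy sketch of the comparison above: how much faster does oxidation proceed
# under a flame than on an average surface in some chosen reference class?

def relative_oxidation(flame_rate, reference_rates):
    """Ratio of the flame's oxidation rate to the mean rate over the
    reference class (which may itself include burning surfaces)."""
    mean_rate = sum(reference_rates) / len(reference_rates)
    return flame_rate / mean_rate

# Hypothetical reference class: mostly slow-oxidizing surfaces, one on fire.
reference = [0.001, 0.002, 0.001, 5.0]    # arbitrary units
print(relative_oxidation(5.0, reference)) # ~4x the class average
```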
Thanks for all your comments!
Is this actually helpful? I try to read up on the background for this stuff, but I never know if I’m just rehashing what’s already been discussed, and if so, whether reviewing that here would be useful to anyone.