It doesn’t sound like particularly common sense—I’d guess that significantly less than half of humans would arrive at that as a cached ‘common sense’ solution.
that is very difficult to derive rationally.
It’s an utterly trivial application of instrumental rationality. I can come up with it in two seconds. If the AI is as smart as I am (and with far fewer human biases) it can arrive at the solution as easily as I can. Especially after it reads every book on strategy that humans have written. Heck, it can read my comment and then decide whether it is a good strategy.
Artificial intelligences aren’t stupid.
Just a little more anthropomorphizing and we’ll be speaking of AI that just knows what is the moral thing to do, innately, because he’s such a good guy.
Or… not. That’s utter nonsense. We have been explicitly describing AIs that have been programmed with terminal goals. The AI would then optimise for those terminal goals, not for whatever happens to be ‘the moral thing to do’.
The ‘basic analysis of what my humans seem to want’ has fairly creepy overtones to it (testing-hypotheses style). On top of that, say you tell it, okay, just do whatever you think I would do if I thought faster, and it obliges: you are vaporized, because you would have gotten bored into suicide if you thought faster; that is just how your simple value system works. What exactly is wrong with that course of action? I don’t think ‘extrapolating’ is well defined.
CEV is well enough defined that it just wouldn’t do that unless you actually do want it—in which case you, well, want it to do that and so have no cause to complain. Reading even the incomplete specification from 2004 is sufficient to tell us that a GAI that does that is not implementing something that can reasonably be called CEV. I must conclude that you are replying to a straw man (presumably due to not having actually read the materials you criticise).
CEV is not defined to do what you, as-is, actually want, but to do what you would have wanted, even in circumstances where you as-is actually want something else, as the 2004 paper cheerfully explains.
In any case, once you assume the AI has such intent-understanding interpretive powers, it’s hard to demonstrate why instructing it in plain English to “Be a good guy. Don’t do bad things” would not be a better shot.
Programmed in with great effort, thousands of hours of research and development, and even then with a great chance of failure. That isn’t an “assumption”.
The universe doesn’t grade ‘for effort’.
That would seem to be a failure of imagination.
That’s how the pro-CEV argument seems to me.
That exhortation tells even an FAI-complete AI that is designed to follow commands very little about what to do.
When you are a very good engineer you can work around constraints more and more. For example, right now, using the resources the AI could conceivably command without technically innovating, improving the hunger situation in Africa would involve drastic social change, with some people getting shot. With some slightly superhuman technical innovation, it could be done without hurting anyone. We humans are barely-able engineers and scientists; we got this technical civilization once we became just barely able to do that.
And that is enough non-sequiturs for one conversation. My comment in no way implied that it does, nor did it rely on it for the point I was making. It even went as far as to explicitly declare likely failure.
You seem to be pattern matching from keywords to whatever retort you think counters them. This makes the flow of the conversation entirely incoherent and largely pointless.
This is both true and entirely orthogonal to that which it seems intended to refute.