Unfortunately, both of these problems are only really technical ones: lifting mc-aixi on an average laptop from “wins at Pac-Man” to “wins at Robot Wars”, which is about the level at which it may start posing a threat to human safety.
only?
Mc-aixi is not going to win at something as open-ended as Robot Wars just by replacing CTW or CTS with something better.
And anyway, even if it did, it wouldn’t be at the level at which it might start posing a threat to human safety. Do you think the human Robot Wars champions are a threat to human safety? Are they even at the level of taking over the world? I don’t think so.
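For readers wondering what would be swapped out: CTW is Context Tree Weighting, the sequence predictor in mc-aixi, and its basic building block is the Krichevsky–Trofimov estimator. A minimal sketch of that leaf-level estimator (illustrative only, not mc-aixi’s actual implementation):

```python
def kt_probability(bits):
    """Sequential Krichevsky-Trofimov estimate of a binary string.

    The next bit b is predicted with probability
    (count_b + 1/2) / (n + 1), where count_b is how many times b
    has appeared so far and n is the total bits seen so far.
    """
    p = 1.0
    zeros = ones = 0
    for b in bits:
        n = zeros + ones
        if b == 0:
            p *= (zeros + 0.5) / (n + 1)
            zeros += 1
        else:
            p *= (ones + 0.5) / (n + 1)
            ones += 1
    return p

# A heavily biased string is assigned more probability than an
# alternating one, which is how simple patterns get learned quickly.
```

CTW proper mixes such estimators over a whole tree of contexts; the point of the sketch is just that this machinery is a predictor, nothing more.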
When I said a threat to human safety, I meant it literally. A Robot Wars champion won’t take over the world (probably), but it can certainly hurt people, and will generally have no moral compunctions about doing so (only, hopefully, sufficient anti-harm conditioning, if its programmers thought that far ahead).
Ah yes, but in this sense cars, trains, knives, etc. can also certainly hurt people, and will generally have no moral compunctions about doing so.
What’s special about Robot Wars-winning AIs?
Domain-general intelligence, presumably.
The most basic pathfinding plus being a spinner (Hypnodisk-style) = a win vs. most non-spinners.
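To make that concrete: in an open arena, the “most basic pathfinding” a spinner needs is little more than “drive at the opponent”. A toy sketch of one control tick (made-up names, flat obstacle-free arena assumed, not real robot code):

```python
import math

def pursue(own_pos, opponent_pos, max_step=1.0):
    """One control tick for a spinner: just close distance on the opponent.

    With a spinning weapon, contact itself does the damage, so the whole
    'strategy' reduces to steering toward the opponent's current position.
    """
    dx = opponent_pos[0] - own_pos[0]
    dy = opponent_pos[1] - own_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= max_step:
        return opponent_pos  # contact this tick
    # otherwise take a full-speed step along the line to the opponent
    return (own_pos[0] + max_step * dx / dist,
            own_pos[1] + max_step * dy / dist)

# Starting 5 units away and moving 1 unit per tick, the bot closes in:
pos, target = (0.0, 0.0), (3.0, 4.0)
for _ in range(6):
    pos = pursue(pos, target)
```

No learning, no planning, no model of the opponent: which is rather the point of the comment above.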
I took “winning at Robot Wars” to include the task of designing the robot that competes. Perhaps nshepperd only meant piloting, though...
Well, we’re awfully far from that. Automated programming is complete crap; automated engineering is quite cool, but it consists of practical tools. It’s not a power fantasy where you make some simple software with surprisingly little effort and then it does it all for you.
You call it a “power fantasy”—it’s actually more of a nightmare fantasy.
Well, historically, first a certain someone had a simple power fantasy: come up with AI somehow, and then it’ll just do everything. Then there was a heroic power fantasy: the others (who actually wrote some useful software and thus generally had an easier time getting funding than our fantasist) are actually villains about to kill everyone, and our fantasist would save the world.
What’s the difference from, say, a car assembly line robot?
Car assembly robots have a pre-programmed routine they strictly follow. They have no learning algorithms, and usually no decision-making algorithms either. Different programs do different things!
Hey, look what’s in the news today. I have a feeling you underappreciate the sophistication of industrial robots.
However, what confused me a bit in the grandparent post was the stress on the physical ability to harm people. As I see it, anything that can affect the physical world has the ability to harm people. So what’s special about, say, Robot Wars bots?
Notice the lack of domain-general intelligence in that robot, and—on the other side—all the pre-programmed safety features it has that a mc-aixi robot would lack. Narrow AI is naturally a lot easier to reason about and build safety into. What I’m trying to stress here is the physical ability to harm people, combined with the domain-general intelligence to do it on purpose*, in the face of attempts to stop it or escape.
Different programs indeed do different things.
* (Where “purpose” includes “what the robot thought would be useful” but does not necessarily include “what the designers intended it to do”.)
Nobody has bothered putting safety features into AIXI because it is so constrained by resources, but if you wanted to, it’s eminently boxable.
Oh, ok. I see your point there.
I probably do, but I still think it’s worth emphasizing the particular properties of particular algorithms rather than letting people form models in their heads that say Certain Programs Are Magic And Will Do Magic Things.
Looks to me like a straightforward consequence of Clarke’s Third Law :-)
As an aside, I don’t expect attempts to let or not let people form models in their heads to be successful :-/
One such champion isn’t much of a threat, but only because human brains aren’t copy-able.
And if they were?
The question of what would happen if human brains were copy-able seems like a tangent from the discussion at hand, viz. what would happen if there existed an AI that was capable of winning Robot Wars while running on a laptop.