When I said a threat to human safety, I meant it literally. A Robot Wars champion won’t take over the world (probably), but it can certainly hurt people, and will generally have no moral compunctions about doing so.
What’s the difference from, say, a car assembly line robot?
Car assembly robots have a pre-programmed routine they strictly follow. They have no learning algorithms, and usually no decision-making algorithms either. Different programs do different things!
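Purely as a toy illustration (none of these names come from any real robot’s code), the distinction between a fixed pre-programmed routine and a learning, decision-making algorithm might be sketched like this:

```python
# Hypothetical sketch: a pre-programmed routine vs. a learning agent.
# All names here are illustrative, not any real industrial robot's API.

def assembly_robot_step(part):
    """A pre-programmed routine: the same fixed actions for every part."""
    actions = ["pick_up", "align", "weld", "release"]
    return actions  # no state, no learning, no decision-making

class LearningAgent:
    """A toy reinforcement learner: its behavior changes with experience."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}

    def act(self):
        # Greedy decision-making: pick whichever action has paid off most so far.
        return max(self.values, key=self.values.get)

    def update(self, action, reward, lr=0.1):
        # The "routine" itself shifts as rewards come in.
        self.values[action] += lr * (reward - self.values[action])
```

The assembly routine does the same thing on run one and run one million; the learner’s behavior on run one million depends on everything it saw in between, which is exactly what makes it harder to reason about.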
Hey, look what’s in the news today. I have a feeling you underappreciate the sophistication of industrial robots.
However, what confused me a bit in the grandparent post is the stress on the physical ability to harm people. As I see it, anything that can affect the physical world has the ability to harm people. So what’s special about, say, Robot Wars bots?
Notice the lack of domain-general intelligence in that robot, and, on the other side, all the pre-programmed safety features it has that an MC-AIXI robot would lack. Narrow AI is naturally a lot easier to reason about and build safety into. What I’m trying to stress here is the physical ability to harm people, combined with the domain-general intelligence to do it on purpose*, in the face of attempts to stop it or escape.
Different programs indeed do different things.
* (Where “purpose” includes “what the robot thought would be useful” but does not necessarily include “what the designers intended it to do”.)
Oh, OK. I see your point there.
I probably do underappreciate them, but I still think it’s worth emphasizing the particular properties of particular algorithms rather than letting people form models in their heads that say Certain Programs Are Magic And Will Do Magic Things.
Nobody has bothered putting safety features into AIXI because it is so constrained by resources, but if you wanted to, it’s eminently boxable.
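A minimal sketch of what “boxing” means here: every action the agent proposes is mediated through a whitelist before it can touch the outside world. The agent interface and action names below are hypothetical, not AIXI’s actual formalism.

```python
# Hypothetical "box": the only channel from agent to world is a filter
# that drops any proposed action not on an explicit whitelist.

ALLOWED_ACTIONS = {"move_left", "move_right", "observe"}

def boxed_step(agent_propose, world_apply):
    """Run one step, letting only whitelisted actions reach the environment."""
    action = agent_propose()
    if action not in ALLOWED_ACTIONS:
        return None  # proposal silently dropped; the box holds
    return world_apply(action)
```

The point of the sketch is that the safety property lives in the box, not in the agent: nothing about the agent’s internals needs to be understood for the whitelist to constrain what it can physically do.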
That looks to me like a straightforward consequence of Clarke’s Third Law :-)
As an aside, I don’t expect attempts to let or not let people form models in their heads to be successful :-/