We can model induction in a monistic fashion pretty well, although at the moment the models are somewhat lacking in advanced inductive capacity and compression ability. Still, the models are good enough to be built, and they actually work.
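To make the "induction as compression" framing concrete, here is a minimal sketch (mine, not part of the original exchange) of a compression-style sequence predictor: it scores candidate next symbols by how well a simple context model has accounted for similar history so far, which is the basic idea behind treating prediction as compression. Real systems (PPM-style compressors, Solomonoff-inspired approximations) are far more capable; the class name and parameters below are illustrative only.

```python
# Toy compression-style predictor: count how often each symbol follows
# each recent context, then predict using the longest matching context.
from collections import defaultdict

class ContextPredictor:
    def __init__(self, order=3):
        self.order = order
        # counts[context][symbol] = times `symbol` followed `context`
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, sequence):
        for i in range(len(sequence)):
            for k in range(1, self.order + 1):
                if i - k < 0:
                    break
                context = tuple(sequence[i - k:i])
                self.counts[context][sequence[i]] += 1

    def predict(self, history):
        # Back off from the longest matching context to shorter ones.
        for k in range(self.order, 0, -1):
            context = tuple(history[-k:])
            if context in self.counts:
                symbols = self.counts[context]
                total = sum(symbols.values())
                return {s: c / total for s, c in symbols.items()}
        return {}

# Usage: learn a repeating pattern, then predict the next symbol.
p = ContextPredictor(order=3)
p.update("abcabcabcabc")
print(p.predict("abcab"))  # 'c' should dominate
```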
Agents wireheading themselves or accidentally performing fatal experiments on themselves will probably be handled in much the same way that biology has handled it to date—e.g. by liberally sprinkling aversive sensors around the creature’s brain. The argument that such approaches do not scale up is probably wrong—designers will always be smarter than the creatures they build—and will successfully find ways to avoid undesirable self-modifications. If there are limits, they are obviously well above the human level—since individual humans have very limited self-brain-surgery abilities. If this issue does prove to be a significant problem, we won’t have to solve it without superhuman machine intelligence.
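As a toy illustration of what "aversive sensors" might mean in machine terms (my sketch, not something proposed in the exchange): a wrapper that overrides the reward with a large penalty whenever a proposed action would touch protected internals, such as the agent's own reward machinery. The names below (Action, PROTECTED_TARGETS) are assumptions for illustration; whether hard-coded aversions like this scale to smarter agents is exactly what the replies below dispute.

```python
# Hypothetical "aversive sensor" wrapper: actions aimed at protected
# subsystems trigger a strong negative reward instead of their payoff.
from dataclasses import dataclass

PROTECTED_TARGETS = {"reward_module", "aversion_table"}  # assumed labels
AVERSION_PENALTY = -1e6

@dataclass
class Action:
    name: str
    target: str  # which subsystem the action modifies, if any

def shaped_reward(action: Action, base_reward: float) -> float:
    """Return the base reward, overridden by an aversive signal if the
    action tampers with protected parts of the agent itself."""
    if action.target in PROTECTED_TARGETS:
        return AVERSION_PENALTY
    return base_reward

# Usage: ordinary actions pass through; self-tampering is punished.
print(shaped_reward(Action("move", "environment"), 1.0))       # 1.0
print(shaped_reward(Action("rewire", "reward_module"), 10.0))  # -1000000.0
```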
The vision of an agent improving its own brain is probably wrong: once you have one machine intelligence, you will soon have many copies of it—and a society of intelligent machines. That’s the easiest way to scale up—as has been proved in biological systems again and again. Agents will be produced in factories run by many such creatures. No individual agent is likely to do much in the way of fundamental redesign on itself. Instead groups of agents will design the next generation of agent.
That still leaves the possibility of a totalitarian world government wireheading itself—or performing fatal experiments on itself. However, a farsighted organization would probably avoid such fates—in order to avoid eternal oblivion at the hands of less short-sighted aliens.
Agents wireheading themselves or accidentally performing fatal experiments on themselves will probably be handled in much the same way that biology has handled it to date—e.g. by liberally sprinkling aversive sensors around the creature’s brain
Band-aids as a solution to catastrophes require that we’re able to see all the catastrophes coming. Biology doesn’t care about letting species evolve to extinction, so it’s happy to rely on hacky post-hoc solutions. We do care about whether we go extinct, so we can’t just turn random AGIs loose on our world and worry about all the problems after they’ve arisen.
The argument that such approaches do not scale up is probably wrong—designers will always be smarter than the creatures they build
Odd comment marked in bold. Why do you think that?
and will successfully find ways to avoid undesirable self-modifications
I’m confused. Doesn’t this predict that no undesirable technology will ever be (or has ever been) invented, much less sold?
If this issue does prove to be a significant problem, we won’t have to solve it without superhuman machine intelligence.
We can’t rely on a superintelligence to provide solutions to problems that need to be solved before we can create an SI that’s safe to ask for help with that class of problems in the first place. Not every buck can be passed to the SI.
The vision of an agent improving its own brain is probably wrong
What about the vision of an agent improving on its design and then creating the new model of itself? Are you claiming that there will never be AIs used to program improved AIs?
once you have one machine intelligence, you will soon have many copies of it—and a society of intelligent machines
Because any feasible AI will want to self-replicate? Or because its designers will desire a bunch of copies?
What’s the relevant difference between a society of intelligent machines, and a singular intelligent machine with a highly modular reasoning and decision-making architecture? I.e., why did you bring up the ‘society’ topic in the first place?
However, a farsighted organization would probably avoid such fates—in order to avoid eternal oblivion at the hands of less short-sighted aliens.
I’m not seeing it. If wireheading is plausible, then it’s equally plausible given an alien-fearing government, since wireheading the human race needn’t get in the way of putting a smart AI in charge of neutralizing potential alien threats. Direct human involvement won’t always be a requirement.
why did you bring up the ‘society’ topic in the first place?
A society leads to a structure with advantages of power and intelligence over individuals. It means that we’ll always be able to restrain agents in test harnesses, for instance. It means that the designers will be smarter than the designed—via collective intelligence. If the designers are smarter than the designed, maybe they’ll be able to stop them from wireheading themselves.
If wireheading is plausible, then it’s equally plausible given an alien-fearing government, since wireheading the human race needn’t get in the way of putting a smart AI in charge of neutralizing potential alien threats.
What I was talking about was “the possibility of a totalitarian world government wireheading itself”. The government wireheading itself isn’t really the same as humans wireheading. However, probably any wireheading increases the chances of being wiped out by less-stupid aliens. Optimizing for happiness and optimizing for survival aren’t really the same thing. As Grove said, only the paranoid survive.