Eli,

FAI problems are AGI problems; they are simply a particular kind and style of AGI problem in which large sections of the solution space have been crossed out as unstable.
Ok, but this doesn’t change my point: you’re just one small group out of many around the world doing AI research, and you’re trying to solve an even harder version of the problem while using fewer of the available methods. These factors alone make it unlikely that you’ll be the ones to get there first. If this is correct, then your work is unlikely to affect the future of humanity.
Vladimir,
Outcompeting other risks only becomes relevant when you can provide a better outcome.
Yes, but that might not be all that hard. Most AI researchers I talk to about AGI safety think the idea is nuts—even the ones who believe that superintelligent machines will exist in a few decades. If somebody is going to set off a superintelligent machine, I’d rather it was a machine that will only probably kill us than a machine that almost certainly will kill us because issues of safety haven’t even been considered.
If I had to sum up my position, it would be: maximise the safety of the first powerful AGI, because that’s likely to be the one that matters. Provably safe theoretical AGI designs aren’t going to matter much to us if we’re already dead.
If somebody is going to set off a superintelligent machine, I’d rather it was a machine that will only probably kill us than a machine that almost certainly will kill us because issues of safety haven’t even been considered.
A plausible problem is server-side machine intelligence collecting the world’s wealth and then distributing it very unevenly, which could cause political problems and unrest. Patent and copyright laws make this kind of problem worse. I think that sort of scenario is much more likely than a bug causing an accidental takeover of the world.
Most AI researchers I talk to about AGI safety think the idea is nuts—even the ones who believe that superintelligent machines will exist in a few decades.
The idea that machines will turn against society and destroy civilization is pretty “out there”. Too many SF movies at a young age, perhaps.
The idea that machines will have an ethical dimension is pretty mainstream, though—thanks in no small part to Asimov.