Wouldn’t poor programming make grey goo a type of unfriendly A.I.? If so, then that would justify leaving it out of this description, as nanotechnology would just facilitate the issue. The computer commanding the nanobots would be the core problem.
Some of this is a matter of terminology, used deliberately to narrow the topic: when Eliezer talks about FAI and uFAI, he’s mostly talking about potential Artificial General Intelligence. A nano-machine that makes additional copies of itself as part of its programming is not necessarily a General Intelligence. Most of the predicted uses of nano-machines wouldn’t require (or be designed to have) general intelligence.
I’m very aware that giving a terminological answer conceals that there is no agreement on what is or isn’t “General Intelligence.” About all we can agree on is that human intelligence is the archetype.
To put it slightly differently, one could argue that the laptop I’m using right now is a kind of Intelligence. And it’s clearly Artificial. But conversations about Friendly and unFriendly aren’t really about my laptop.
Fair enough, but the grey goo issue is still rooted enough in programming to be categorized separately from the direct implications of nanotechnological production.
Eh, I guess. I’m not a big fan of worrying about the consequences of something that both (a) works exactly as intended and (b) makes us richer.
So I think it conflates separate problems to worry about weapon production and the general man’s-inhumanity-to-man problem when the topic is nanotechnology.
More importantly, the exact issue I’m worried about (poor programming of something powerful and barely under human control, which has nothing to do with figuring out human morality) seems likely to be skipped.