What did you find most interesting this week?
Bostrom makes an interesting point that open literature tends to benefit small teams relative to large ones: a large project generates plenty of insight internally anyway, so a shared pool of published results narrows the gap more for the small team than for the big one.
Also, he points out that it is easier for a government to nationalise the most successful projects than to ban all projects. The most successful projects are likely to have significant funding, hire well-known experts, attend conferences, etc. Small basement projects, on the other hand, could much more easily escape detection. Unfortunately, this creates a pro-arms-race dynamic: it is easier for governments to try to outrace small teams with projects of their own than to ban development outright.
The point about countries defecting from global collaborations is also interesting. It’s relatively hard to defect from a nuclear-sharing project: you actually need to build your own reactors, acquire fissile material, etc. But starting your own AI project might only require stealing some thumb drives.
The “Monitoring” subsection was also worthwhile. Bostrom makes good points about why state actors would be likely to seek to monopolize AGI development if they took it to be a threat to their monopoly on power. I hadn’t given that possibility sufficient consideration until now. These actors do seem uninterested so far, but (as Bostrom points out) that might well be a pretense, and even if it isn’t, it doesn’t mean they’ll stay uninterested for long.