We want to avoid ubiquitous surveillance, or minimize its impact. If there exists a sufficiently dangerous technology, that leaves you two choices.
You can do what surveillance and enforcement is necessary to limit access.
You can do what surveillance and enforcement is necessary to contain usage.
Which of these will violate freedom less? My strong prediction for AGI is the first one.
While I certainly agree that option 1 seems much better, I don’t see how we can maintain option 1 for long. A few years, perhaps. But the knowledge that powerful AI is possible, and obtainable with some set of algorithms that improve on published ones plus consumer-grade technology… that seems sufficient for smart, selfish actors to rationalize secretly trying to reverse-engineer their way to those better algorithms. And that scenario requires option 2 if things are not to go wrong when they succeed. This would be an easier problem if we were hardware-constrained, but I don’t think we are, and I think we will be even less hardware-constrained in the future as effective compute prices continue to fall. There are a LOT of small, privately-owned datacenters out there. If a basement bitcoin miner’s setup is enough to be a dangerous supply of compute, then option 2 seems like the only stable one.
Edit: I think Zvi puts the problem well in this section:
The third problem is the competitive and evolutionary one: the dynamics and equilibrium of a world with many ASIs (artificial superintelligences) in it.
This is a world almost no one is making any serious attempt to think about or model, and those who have (such as fiction writers) almost always end up using hand waves or absurdities and presenting worlds highly out of equilibrium.
We will be creating something smarter and more capable and better at optimization than ourselves, that many people will have strong incentives both economic and ideological to make into various agents with various goals including reproduction and resource acquisition. Why should we expect to long be in charge, or even to survive?
So what, then, leads to the difference in our views in the first quote? I suppose it comes down to whether the hardware sufficient for dangerous ASI will be broadly distributed? Please correct me if I’m confused here.
My view is that scale is sufficient but not necessary: either larger scale OR secret-sauce algorithmic improvements will lead to AI capable enough to initiate recursive self-improvement, which, if run unchecked, will scale to ASI.
Humanity has a track record of smart groups of engineers noticing that some other group has achieved a specific technology, and then figuring out how to replicate it. We really shouldn’t count on that NOT happening in this case, when so much is on the line. Simply knowing that another group solved it, what tools they used, and what background knowledge they went into their research project with… that’s enough to solve it independently a few years later.