So far as I can tell, the most likely reason we wouldn’t get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement, with a possible side order of failing to solve the goal transfer problem.
I’m curious that you seem to think the former problem (stable self-improvement) is harder or less likely to be solved than the latter (goal transfer). I’ve been thinking the opposite, and one reason is that the latter problem seems more philosophical and the former more technical, and humanity seems to have a lot of technical talent that we can eventually recruit to do FAI research, but much less untapped philosophical talent.
Also, as a side note, I don’t think we should be focusing purely on the “we come up with a value-stable architecture and then the FAI will make a billion self-modifications within the same general architecture” scenario. Another possibility is that we don’t solve the stable self-improvement problem at all, but instead solve the value transfer problem in a general enough way that the FAI we build immediately creates an entirely new architecture for the next-generation FAI and transfers its values to its creation using our solution, and this happens just a few times. (The FAI doesn’t try to make a billion self-modifications to itself because, just like us, it knows that it doesn’t know how to safely do that.) (Cousin_it made a similar comment earlier.)
In all, I can see three arguments for prioritizing the value transfer problem over the stable self-improvement problem: 1) the former seems harder, so we need to get started on it earlier; 2) we know the former definitely needs to be solved, whereas the latter may not need to be; 3) the former involves work that’s less useful for building UFAI.
(On the main topic of the post, I’ve been assuming, without having put too much thought into it, that slower economic growth is good for eventually getting FAI. Now after reading the discussions here and on Facebook I realize that I haven’t put enough thought into it and perhaps should be less certain about it than I was.)
On the main topic of the post, I’ve been assuming, without having put too much thought into it, that slower economic growth is good for eventually getting FAI.
Here’s an attempt to verbalize why I think this, which is a bit different from Eliezer’s argument (which I also buy to some extent). First, I think UFAI is much easier than FAI, and we are putting more resources into the former than the latter. To put this into numbers for clarity, let’s say UFAI takes 1000 units of work and FAI takes 2000 units of work, and we’re currently putting 10 units of work per year into UFAI and only 1 unit of work per year into FAI. If we had a completely stagnant economy, with 0% growth, we’d have 100 years to do something about this, or for something to happen to change this, before it’s too late. If the economy were instead growing at 5% per year, and this increased both UFAI and FAI work by 5% per year, the window of time “for something to happen” shrinks to about 35 years. The economic growth might increase the probability per year of “something happening”, but it doesn’t seem like it would be enough to compensate for the shortened timeline.
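To make the arithmetic explicit, here is a minimal sketch (Python, using only the made-up toy numbers above) of how the window shrinks once the yearly effort compounds:

```python
import math

def years_until_ufai(work_needed=1000.0, work_per_year=10.0, growth=0.0):
    """Years until cumulative UFAI effort reaches `work_needed`,
    assuming the annual effort grows by `growth` (compounded yearly)."""
    if growth == 0.0:
        return work_needed / work_per_year
    # Geometric series: work_per_year * ((1 + growth)**n - 1) / growth = work_needed
    return math.log(1 + growth * work_needed / work_per_year) / math.log(1 + growth)

print(years_until_ufai(growth=0.00))  # 100.0 years with a stagnant economy
print(years_until_ufai(growth=0.05))  # ~36.7 years with 5% annual growth
```

With continuous rather than annual compounding the second figure comes out slightly lower, around 36 years, so “about 35 years” is the right ballpark either way.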
Also: many likely reasons for something to happen about this center around, in appropriate generality, the rationalist!EA movement. This movement is growing at a higher exponential rate than current economic growth.
I think this is the strongest single argument that economic growth might currently be bad. However, even then, what matters is the elasticity of the movement’s growth rate with respect to the economic growth rate. I don’t know how we can measure this; I expect it’s positive and less than one, but I’m rather more confident about the lower bound than the upper bound.
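As a toy illustration of what that elasticity would mean quantitatively (the constant-elasticity form and all the numbers here are my own assumptions, not anything measured):

```python
def movement_growth(econ_growth, elasticity=0.5,
                    baseline_econ=0.05, baseline_movement=0.20):
    """Constant-elasticity toy model: the movement's growth rate scales as
    (economic growth rate)**elasticity, normalized to a hypothetical baseline
    of 20%/yr movement growth at 5%/yr economic growth."""
    return baseline_movement * (econ_growth / baseline_econ) ** elasticity

# If economic growth halves from 5% to 2.5% and the elasticity is 0.5,
# the movement's growth rate only falls from 20%/yr to about 14%/yr;
# a less-than-proportional drop, which is what elasticity < 1 means here.
print(movement_growth(0.025))  # ~0.141
```

Under an elasticity of exactly one the drop would be proportional (20% to 10%), and under an elasticity of zero the movement’s growth would be unaffected by the slowdown.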