I do “think that the pursuit of Friendly AI [and the avoidance of unfriendly AI] is by far the most important component of existential risk reduction”. I also think that SIAI is not addressing the most important problem in that regard. I suspect there are a lot of people who would agree, for various reasons.
In my case, the logic is that I think:
1) That corporations, though not truly intelligent, are already superhuman and unFriendly.
2) That coordinated action (that is, strategic politics, in well-chosen solidarity with others with whom I have important differences) has the potential to reduce their power and/or increase their Friendliness.
3) That this would, in turn, reduce the risk of them developing a first-mover unFriendly AI …
3a) … while also increasing the status of your ideas in a coalition which may be able to develop a Friendly one.
I recognize that points 2 and 3a are partially tribal and/or hope-seeking beliefs of mine, but think 1 and 3 are well-founded rationally.
Anyway, this is only one possible reason for parting ways with the SIAI and the FHI, without in any sense discounting the risks they exist to confront.
From your analysis, it seems that FHI would be very well aligned with your goals: it’s a high-profile academic (rather than corporate) entity which can publicize existential risks (and takes the corporate creation of such risks seriously, IIRC).
Would this not be desirable, or is there any organization within the broader anticorporate movement you speak of that would even think to do the same with comparable competence?
I believe that explicitly political movements, not academic ones, are the only ones that are other-optimizing enough to fight the mal-optimization of corporations. And I think that at our current level of corporate power versus AI-relevant technological understanding, my energy is best spent fighting the former rather than advancing the latter (and I majored in cognitive science and work as a programmer, so I hold that same conclusion for most people).
I realize that these beliefs are partly tribal (something which allows me to get along with my wife and friends) and partly hope-seeking (something which allows me to get up in the morning). I think that these are valid reasons to give a belief the benefit of the doubt. I would not, however, use these excuses to justify a belief with no rational basis, or to avoid considering an argument that such a basis is lacking. Anyway, beyond the caveats in the previous sentence, I don’t think that trying to rid oneself of tribal and hope-seeking biases would make one appreciably more rational.
They get to use borrowed intelligence from their human symbiotes, though. ;-) (Or would they be symbionts? Hm...)
Re: coordinated action to tame corporations
One thing we need is corporation reputation systems. We have product reviews and so forth, but the whole area is poorly organised.
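(To make the idea concrete, here is a minimal sketch of what the aggregation step of such a system might look like. The corporation names, scores, and weighting scheme are all hypothetical illustrations, not a reference to any existing system.)

    from collections import defaultdict

    # Hypothetical sketch: pool scattered reviews into one reputation
    # score per corporation. Each review is (corporation, score in [0, 1],
    # weight reflecting the reviewer's credibility).
    reviews = [
        ("AcmeCorp", 0.2, 1.0),  # e.g. a labor-practices review
        ("AcmeCorp", 0.4, 0.5),  # e.g. a product review, less credible source
        ("Globex", 0.9, 1.0),    # e.g. an environmental audit
    ]

    def reputation(reviews):
        """Credibility-weighted average score per corporation."""
        totals = defaultdict(lambda: [0.0, 0.0])  # corp -> [weighted sum, total weight]
        for corp, score, weight in reviews:
            totals[corp][0] += score * weight
            totals[corp][1] += weight
        return {corp: s / w for corp, (s, w) in totals.items()}

    print(reputation(reviews))  # {'AcmeCorp': 0.266..., 'Globex': 0.9}

The hard part, of course, is not the arithmetic but collecting and weighting the reviews across domains (labor, environment, products), which is exactly the organisational gap being pointed at.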
Why are corporations “not truly intelligent”? They contain humans, surely. Would you say that humans are “not truly intelligent” either?
They contain humans. However, while corporations themselves are psychopathic, most are not controlled and staffed by psychopaths. This gives corporations (thank Darwin) cognitive biases which systematically reduce their intelligence when pursuing obviously unFriendly goals.
In the end, it depends on your definition of intelligence. The intelligence of a corporation in choosing strategies to fit its goals is sometimes at the level of natural selection (weak), sometimes at the level of human intelligence (true), and sometimes at the level of effective crowd intelligence (mildly superhuman). I’d guess that on the whole, they average somewhat below human intelligence (but with much higher power) when pursuing explicitly unFriendly subgoals, and somewhat above human intelligence when pursuing subgoals that happen to be neutral or Friendly. But that does not necessarily mean they are on balance Friendly, because their root goals are not.
The basic idea with corporations is that they are kept in check by an even more powerful organisation: the government. If any corporation gets too big, the Monopolies and Mergers Commission intervenes and splits it up. As far as I know, no corporation has ever overthrown its “parent” government.
Other governments, however...