This does sound a bit like the Copenhagen Interpretation of Ethics. If you interact with the companies (e.g. by holding shares in them), you can be blamed for their actions. However, you actually have a far bigger interaction with these companies by buying and using computers at all.
Holding shares in a company has, to first and second order, no effect on its ability to do what it does. The main effect is a much more distant third-order one: by selling shares, you (very slightly) reduce net demand for the stock, which (even more slightly) reduces the share price, which (more slightly still) reduces the company's ability to obtain further funding.
Buying a computer has a first- or second-order effect: Nvidia directly profits from every new computer that uses their products. Even if you are careful not to buy any computer that uses Nvidia chipsets or licensed designs (much harder than it might seem; they don’t just make GPUs), there is a second-order effect in which you are increasing demand for computers (which are largely fungible) so that the equilibrium price shifts (very slightly) upward and Nvidia has more profit potential.
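For concreteness, here is a minimal back-of-envelope sketch (in Python) of the two chains above. Every number in it (the divested amount, market cap, price-impact factor, computer price, chipmaker margin) is an assumed placeholder chosen purely for illustration, not real market data:

```python
# Back-of-envelope comparison of the two effect chains described above.
# All numbers are made-up placeholders to illustrate orders of magnitude.

# Third-order chain: sell shares -> slightly lower net demand ->
# slightly lower share price -> slightly harder future fundraising.
shares_sold_value = 10_000        # assumed: you divest $10k of stock
market_cap = 1_000_000_000_000    # assumed: roughly a $1T company
price_impact_factor = 0.1         # assumed: fraction of order value that moves the price

price_effect = shares_sold_value / market_cap * price_impact_factor
print(f"Relative share-price effect of divesting: {price_effect:.2e}")

# First/second-order chain: buy a computer -> the chipmaker books
# revenue directly on the sale.
computer_price = 1_500            # assumed: one consumer machine
chipmaker_margin = 0.2            # assumed: chipmaker's share of the sale price

direct_profit = computer_price * chipmaker_margin
print(f"Direct profit from one computer purchase: ${direct_profit:.2f}")
```

Under these made-up numbers, the direct-revenue channel from a hardware purchase exceeds the divestment channel by many orders of magnitude, which is the comparison the two paragraphs above are making.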
Similar effects follow from using computers on the Internet: advertising revenue, network effects, and other engagement signals all increase the revenues of these companies.
From a consequentialist point of view, there seems little reason to sell shares in those companies. Even an argument based on incentives doesn’t really go through very well. Nonetheless I do agree with you that choosing to profit from actions that plausibly risk destroying humanity seems wrong, in a way that consequentialist ethics does not adequately capture.
I don’t think there are any ethically safe AI investments right now, in the sense of making an expected profit from actions that reduce this risk. I think that such investments could exist in some hypothetical world, but I don’t see how to make it work in the real one.
> This does sound a bit like the Copenhagen Interpretation of Ethics. If you interact with the companies (e.g. by holding shares in them), you can be blamed for their actions. However, you actually have a far bigger interaction with these companies by buying and using computers at all.
That… is not at all how the Copenhagen Interpretation of Ethics works. You say someone has a “Copenhagen Interpretation of Ethics” when they criticize you for interacting with a problem even though you’re not contributing to it, or are even mildly ameliorating it. It’s not a “Copenhagen Interpretation of Ethics” to criticize someone for contributing to a problem, just because they contribute less than they potentially could.
Back-of-the-napkin reasoning: we actually have to PAY to reduce risk, so there’s no way to make money doing that.