If your singularity stocks rapidly increase in value,
Why would they do that? Anyone who believes the singularity is coming will want to sell them off, and those who don’t have little reason to buy. Stock markets can’t price in their own non-existence.
Second, being a shareholder in the singularity could help you affect it.
Suppose Google has almost reached AGI. $1000 of shares isn’t going to buy meaningful influence over the details of a particular (probably secretive) project. I would be better off arranging to frequent the same social clubs as the programmers and getting into discussions about AI, or mailing a copy of “Superintelligence” by Bostrom to the whole team.
Anyone who believes the singularity is coming will want to sell them off
My model of the average non-altruistic, non-rationalist investor is that they will want to hold onto their singularity stocks in order to increase their odds of being wealthy and powerful in the posthuman world. By the way, I gave two reasons in my post why a rationalist investor would want to hold onto their stocks.
those who don’t have little reason to buy
If a company creates some kind of transformative AI technology, we won’t know in advance just how transformative the technology will be, or how quickly the potential of the transformation will be realized. Suppose a company comes up with an AI breakthrough that lets robots automate most manual labor, but it requires a person-year of effort to automate any given job. That company is going to be tremendously valuable.
Let’s say an AI company comes up with something that’s clearly a breakthrough. And little by little, it starts automating jobs, starting with the easy jobs. Let’s suppose you’re a singularity-skeptic investor. You don’t think the singularity is going to happen any time soon. But this stock clearly seems like it could be super valuable and swallow a decent chunk of the world economy. So you buy.
Suppose Google has almost reached AGI. $1000 of shares isn’t going to buy meaningful influence over the details of a particular (probably secretive) project.
I think a lot depends on this “probably secretive” part. Let’s say the project isn’t secretive. Let’s say it’s like AlphaGo: something big and splashy that gets even more attention than AlphaGo did, and seems to have widespread commercial applications. In that case, it will come up at shareholder meetings, and you can more easily be part of those conversations if you’re a shareholder (and also exercise your vote). Note that the shareholder conversations matter a lot: programmers report to their boss, who reports to the CEO, who reports to the board, which answers to the shareholders.
I would be better off arranging to frequent the same social clubs as the programmers and getting into discussions about AI, or mailing a copy of “Superintelligence” by Bostrom to the whole team.
These sorts of things are already happening, and you have to be careful doing them because you run the risk of coming across as a shill. However, the corporate governance route to influencing the singularity appears neglected, and it doesn’t carry that risk: as an owner of the company that’s developing the breakthrough tech, you have some small amount of legal authority over how it’s deployed.