For (a), it would probably look like the late-’90s dot-com bubble run-up, except that it wouldn’t end with the bubble bursting and most of the companies going under; instead it would just keep going while the world dramatically changed.
For (b), I don’t think we would really know until it had started, at which point things would go bad very, very quickly. I doubt you could use price movements to predict it far in advance.
In general, market prices can fall much faster than they rise. Scenario (a) would look like a continual parabolic climb, while (b) would just be a massive crash.
For (a), it would probably look like the late-’90s dot-com bubble run-up
Why? In both cases money becomes meaningless post-singularity.
If you expect a happy singularity in the near future, you should actually pull your money out of investments and spend it all on consumption (or risk mitigation).
My idea was that for (a), money would be becoming worthless but ownership of the companies driving the singularity would not. In that case, the price of shares in those companies would skyrocket toward infinity as everyone piled all of that soon-to-be-worthless money into them.
Of course, if the ownership of those companies was not going to matter either, then what you said would be true.
if the ownership of those companies was not going to matter
This is something that I think is neglected (in part because it’s not the relevant problem yet) in thinking about friendly AI. Even if we had solved all of the problems of stable goal systems, there could still be trouble, depending on whose goals are implemented. If it’s a fast takeoff, whoever cracks recursive self-improvement first basically gets godlike powers (in the form of a genie that reshapes the world according to its creator’s wish). They define the whole future of the expanding visible universe. There are a lot of institutions that I do not trust to have the foresight to think “We can create a utopia beyond anyone’s wildest dreams” rather than default to “We’ll skewer the competition in the next quarter.”