It seems to me that the most obvious way to demonstrate the brilliance and excellent outcomes of the applied math of optimization would be to generate large sums of money, rather than seeking endorsements.
The Singularity Institute could begin this at no cost (beyond the opportunity cost of staff time) by applying the techniques of rationality in a paper-trading market — say, with stock picks as the chosen venue. After a few months of paper profits, SI could stake the effort with $1,000. If that kept growing, a larger investment could be considered.
Something like this has already been done. Someone on Overcoming Bias recently wrote of how they and some friends each made about $500 on a small investment by identifying an arbitrage opportunity between InTrade and another prediction market, with no risk of loss.
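The arithmetic behind that kind of prediction-market arbitrage is worth spelling out. A minimal sketch, with entirely hypothetical prices (the original post did not give the actual contracts or amounts): each binary contract pays $1 if its side wins, so holding a matched YES on one market and a NO on the other guarantees a $1 payout per pair, and any combined purchase price under $1 is locked-in profit.

```python
def arbitrage_profit(yes_price: float, no_price: float, contracts: int = 1) -> float:
    """Profit from buying matched YES/NO contracts across two markets.

    One YES plus one NO pays out exactly $1 regardless of the outcome,
    so if the pair costs less than $1 the difference is risk-free.
    Prices here are hypothetical; real trades also pay fees and spreads.
    """
    cost_per_pair = yes_price + no_price
    if cost_per_pair >= 1.0:
        return 0.0                              # no arbitrage at these prices
    return (1.0 - cost_per_pair) * contracts    # locked-in profit

# Hypothetical example: YES at $0.55 on one market, NO at $0.40 on the other.
# 10,000 paired contracts lock in (1.00 - 0.95) * 10,000 = $500.
profit = arbitrage_profit(0.55, 0.40, contracts=10_000)
```

In practice transaction fees, margin requirements, and the risk of one leg failing to fill eat into the edge, which is why these gaps tend to be small and short-lived.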
Money can be made, according to proverb, by being faster, luckier, or smarter. It's impossible to manufacture luck in the market, and in the era of microsecond trades by Goldman Sachs it's very nearly impossible to be faster — but an organization (or perhaps a set of associated organizations?) devoted to defeating internal biases and mathematically assessing the best choices in the world should be striving to be smarter.
While it seems very interesting and worthwhile to work on existential risk from UFAI directly, the smarter move might be to devote a decade to making an immense pile of money for the institute and building the associated infrastructure (hiring money managers, socking a chunk away in Berkshire Hathaway for safety, etc.). Then hire a thousand engineers and mathematicians. And what's more, you'd raise awareness of UFAI far more than you would by plugging along as just another $1–2M charity.
I'm sure this must have been addressed somewhere, of course — there is simply too much written in too many places by too many smart people. But it is odd to me that SI's page on Strategic Insight doesn't list "Become Rich" as item #1. Maybe if someone notices this comment, they can point me to the argument against it?
The official introductory SI pages may have to soft-pedal such issues for PR reasons ("everyone get rich, then donate your riches" gives off a bad vibe).
As you surmised, your idea has been brought up quite often in various contexts, especially in optimal charity discussions. For many/most endeavors, the globally optimal starting steps are “acquire more capabilities / become more powerful” (players of strategy games may be more explicitly cognizant of that stratagem).
I also remember speculation that a friendly AI and an unfriendly AI may act very similarly at first — both choosing the optimal path to powering up, so that each can pursue the differing goals of its utility function more efficiently at a future point in time. So your thoughts on the matter seem compatible with the local belief cluster.
Your money proverb still seems to hold true. Anecdotally, I'm acquainted with some CS people making copious amounts of money on NASDAQ doing simple ANOVA analyses, while barely being able to spell the companies' names. So why aren't we doing that? Maybe a combination of mental inertia and being locked into a research/get-endorsements modus operandi, which may be hard to shift out of into a more active "let's create start-ups" / "let's do day-trading" mode.
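For readers unfamiliar with the technique being name-dropped, here is a minimal sketch of what a "simple ANOVA analysis" of returns might look like — a one-way test of whether mean daily returns differ across a few tickers. The data below is simulated and the tickers are placeholders; the anecdote gave no details of the actual analysis, so this only illustrates the statistical tool, not the strategy.

```python
import numpy as np
from scipy.stats import f_oneway

# Simulated daily returns for three hypothetical tickers,
# roughly one trading year (250 days) each.
rng = np.random.default_rng(0)
returns = {
    "AAA": rng.normal(0.001, 0.02, 250),
    "BBB": rng.normal(0.000, 0.02, 250),
    "CCC": rng.normal(-0.001, 0.02, 250),
}

# One-way ANOVA: the F statistic is large (and p small) when the
# between-group differences in mean return exceed what within-group
# day-to-day noise would explain.
f_stat, p_value = f_oneway(*returns.values())
```

Of course, detecting a statistically significant difference in historical means is a long way from a profitable strategy — but it shows how little machinery the anecdote is claiming was involved.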
A goal-function of "seek influential person X's approval" leads to a different mindset from "let quantifiable results speak for themselves"; the latter frees you from optimizing every step of the way for signalling purposes.