I do think OP is right that in practice, 100 years ago, it would have been really hard to figure out what an AI issue looked like. This was pre-Gödel, pre-decision-theory, pre-Bayesian-revolution, and pre-computer. Yes, a sufficiently competent Earth would have been doing AI math before it had the technology for computers, in full awareness of what it meant—but that’s a pretty darned competent Earth we’re talking about.
I think it is fair to say Earth was doing the “AI math” before the computers. Extending to today: there is a lot of mathematics to be done for a good, safe AI—but how are we to know that SI has the effort-planning skills required to correctly identify and fund research in such mathematics?
I know that you believe that you have the required skills; but note that in my model such a belief can result either from the presence of extraordinary effort-planning skill or from its absence. The prior probability of extraordinary effort-planning skill is very low. Furthermore, since effort planning is, to some extent, a cross-domain skill, the prior inefficacy (which Holden criticized) seems to be fairly strong evidence against extraordinary skill in this area.
If my writings (on FAI, on decision theory, and on the form of applied-math-of-optimization called human rationality) so far haven’t convinced you that I stand a sufficient chance of identifying good math problems whose solutions bear on existential risk, you should probably fund CFAR instead. This is not, in any way, shape, or form, the same skill as the ability to manage a nonprofit. I have not ever, ever claimed to be good at managing people, which is why I kept trying to have other people do it.
I’m not sure why you think that such writings should convince a rational person that you have the relevant skill. If you were an art critic, even a very good one, that would not convince people that you are a good artist.
This is not, in any way shape or form, the same skill as the ability to manage a nonprofit.
Indeed, but you are asking me to assume that the skills you display writing your articles are the same skill as the skills relevant to directing the AI effort.
edit: Furthermore, when it comes to works on rationality as ‘applied math of optimization’, the most obvious way to evaluate those writings is to look for some great success attributable to them—some highly successful businessman saying how much the article on such-and-such fallacy helped him succeed, that sort of thing.
It seems to me that the most obvious way to demonstrate the power of the applied math of optimization would be to generate large sums of money, rather than to seek endorsements.
The Singularity Institute could begin this at no cost (beyond the opportunity cost of staff time) by applying the techniques of rationality in a simulated market: stock picking, for example, if that were the chosen venue. After a few months of paper profits, SI could stake the traders with $1,000 of real money. If that kept growing, a larger investment could be considered.
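To make the paper-trading step concrete, here is a minimal sketch of such a ledger. The symbol and prices are made up, and no real brokerage is involved; this is only the bookkeeping that lets you score a strategy before any real dollars are at stake:

```python
class PaperAccount:
    """Track hypothetical trades against observed prices; no real money moves."""

    def __init__(self, cash=1000.0):
        self.cash = cash
        self.positions = {}  # symbol -> share count

    def buy(self, symbol, shares, price):
        cost = shares * price
        assert cost <= self.cash, "insufficient paper cash"
        self.cash -= cost
        self.positions[symbol] = self.positions.get(symbol, 0) + shares

    def sell(self, symbol, shares, price):
        assert self.positions.get(symbol, 0) >= shares, "selling more than held"
        self.positions[symbol] -= shares
        self.cash += shares * price

    def value(self, prices):
        """Mark the account to market at the given price quotes."""
        return self.cash + sum(n * prices[s] for s, n in self.positions.items())


# Hypothetical round trip: buy 10 shares at $50, sell at $55.
acct = PaperAccount(cash=1000.0)
acct.buy("XYZ", 10, 50.0)
acct.sell("XYZ", 10, 55.0)
print(acct.value({"XYZ": 55.0}))  # 1050.0
```

Run against a few months of real quotes, the final account value is the verdict on whether the techniques actually beat the market.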
This has been done, very recently. Someone on Overcoming Bias recently wrote of how they and some friends made about $500 each with a small investment by identifying an opportunity for arbitrage between the markets on InTrade and another prediction market, without any loss.
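The arithmetic behind that kind of cross-market arbitrage is easy to check. A binary contract pays $1 if its side resolves true, so buying YES on one market and NO on the other guarantees exactly one payout; if the combined price is under $1, the profit is locked in whichever way the event resolves. The quotes below are hypothetical:

```python
def arbitrage_profit(p_yes_a, p_no_b, contracts=100):
    """Profit from buying YES on market A and NO on market B for the
    same event. Exactly one leg pays $1 per contract at resolution, so
    the profit is (1 - combined price) per contract, outcome-independent."""
    cost = (p_yes_a + p_no_b) * contracts
    payout = 1.0 * contracts  # exactly one leg pays $1 per contract
    return payout - cost


# Hypothetical quotes: YES at $0.40 on one market, NO at $0.55 on the other.
print(round(arbitrage_profit(0.40, 0.55), 2))  # 5.0: about $5 risk-free per 100 contracts
```

If the two prices sum to $1 or more, the function goes nonpositive and there is no trade.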
Money can be made, according to proverb, by being faster, luckier, or smarter. It’s impossible to create luck in the market, and in the era of microsecond purchases by Goldman Sachs it’s very nearly impossible to be faster, but an organization (or perhaps associated organizations?) devoted to defeating internal biases and mathematically assessing the best choices in the world should be striving to be smarter.
While it seems very interesting and worthwhile to work on existential risk from UFAI directly, the smarter thing to do might be to devote a decade to making an immense pile of money for the institute and developing the associated infrastructure (hiring money managers, socking a bunch away into Berkshire Hathaway for safety, etc.). Then hire a thousand engineers and mathematicians. What’s more, you’d raise awareness of UFAI incredibly more than you would otherwise, plugging along as another $1–2M charity.
I’m sure this must have been addressed somewhere, of course—there is simply way too much written in too many places by too many smart people. But it is odd to me that SI’s page on Strategic Insight doesn’t have as #1: Become Rich. Maybe if someone notices this comment, they can point me to the argument against it?
The official introductory SI pages may have to sugarcoat such issues due to PR considerations (“everyone get rich, then donate your riches” gives off a bad vibe).
As you surmised, your idea has been brought up quite often in various contexts, especially in optimal charity discussions. For many/most endeavors, the globally optimal starting steps are “acquire more capabilities / become more powerful” (players of strategy games may be more explicitly cognizant of that stratagem).
I also do remember speculation that friendly AI and unfriendly AI may act very similarly at first—both choosing the optimal path to powering up, so that they can pursue the differing goals of their respective utility functions more efficiently at a future point in time. So your thoughts on the matter seem compatible with the local belief cluster.
Your money proverb seems to still hold true: anecdotally, I’m acquainted with some CS people making copious amounts of money on NASDAQ doing simple ANOVA analyses, while barely being able to spell the companies’ names. So why aren’t we doing that? Maybe a combination of mental inertia and being locked into a research/get-endorsements modus operandi, which may be hard to shift out of into a more active “let’s create start-ups”/“let’s do day-trading” mode.
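The anecdote doesn’t say what those analyses were run on, but the statistic itself is elementary. As an illustration only, with made-up return series, here is the one-way ANOVA F-statistic computed by hand:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F-statistic: mean square between groups over mean
    square within groups. A large F says the group means differ by more
    than within-group noise would explain."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, k - 1 degrees of freedom
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares, n - k degrees of freedom
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))


# Made-up daily returns (%) for three tickers
returns = [
    [0.5, 0.7, 0.6, 0.8],
    [0.1, 0.2, 0.0, 0.1],
    [0.9, 1.1, 1.0, 0.8],
]
print(round(one_way_anova_f(returns), 2))  # 55.75
```

scipy.stats.f_oneway computes the same statistic plus a p-value, if you’d rather not roll it yourself; the hard part is not the test but finding groupings whose differences persist out of sample.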
A goal function of “seek influential person X’s approval” leads to a different mindset from “let quantifiable results speak for themselves”; the latter allows you not to optimize every step of the way for signalling purposes.