It seems to me that the premise of funding SI is that people smarter (or more appropriately specialized) than you will then be able to make discoveries that would otherwise be underfunded or wrongly purposed.
But then SI has to have a dramatically better idea of what research should be funded to protect mankind than every other group of people capable of either performing such research or employing people to perform it.
Muehlhauser has stated that SI should be compared to alternatives in the form of other organizations working on AI risk mitigation, but that seems like an overly narrow choice, reliant on the presumption that not working on AI risk mitigation now is not itself an alternative.
For example, 100 years ago it would seem to have been too early to fund work on AI risk mitigation; that may still be the case. As time goes on, one could naturally expect opinions to form a distribution, with the first organizations offering AI risk mitigation popping up earlier than the time at which such work becomes effective. When we look into the past through the goggles of notoriety, we don’t see all the failed early starts.
For example, 100 years ago it would seem to have been too early to fund work on AI risk mitigation
Disagree. There are many remaining theoretical (philosophical and mathematical) difficulties whose investigation doesn’t depend on the current level of technology. It would’ve been better to start working on the problem 300 years ago, when AI risk was still far away. The value of information on this problem is high, and we don’t (didn’t) know that there is nothing to be discovered; it wouldn’t be surprising if some kind of progress were made.
I do think OP is right that in practice, 100 years ago, it would have been really hard to figure out what an AI issue looked like. This was pre-Gödel, pre-decision-theory, pre-Bayesian-revolution, and pre-computer. Yes, a sufficiently competent Earth would be doing AI math before it had the technology for computers, in full awareness of what it meant, but that’s a pretty darned competent Earth we’re talking about.
I think it is fair to say Earth was doing the “AI math” before the computers. Extending this to today: there is a lot of mathematics to be done for a good, safe AI, but how are we to know that SI has the actionable effort-planning skills required to correctly identify and fund research in such mathematics?
I know that you believe that you have the required skills; but note that in my model such a belief can result both from the presence of extraordinary effort-planning skill and from its absence. The prior probability of extraordinary effort-planning skill is very low. Furthermore, as effort planning is, to some extent, a cross-domain skill, the prior inefficacy (which Holden criticized) seems to be fairly strong evidence against extraordinary skill in this area.
If my writings (on FAI, on decision theory, and on the form of applied-math-of-optimization called human rationality) so far haven’t convinced you that I stand a sufficient chance of identifying good math problems whose solutions provide a strong input into reducing existential risk, you should probably fund CFAR instead. This is not, in any way shape or form, the same skill as the ability to manage a nonprofit. I have not ever, ever claimed to be good at managing people, which is why I kept trying to have other people do it.
I’m not sure why you think that such writings should convince a rational person that you have the relevant skill. If you were an art critic, even a very good one, that would not convince people you are a good artist.
This is not, in any way shape or form, the same skill as the ability to manage a nonprofit.
Indeed, but you are asking me to assume that the skills you display in writing your articles are the same as the skills relevant to directing the AI effort.
edit: Furthermore, when it comes to works on rationality as ‘applied math of optimization’, the most obvious way to evaluate those writings is to look for some great success attributable to them: some highly successful businessman saying how much the article on such-and-such fallacy helped him succeed, that sort of thing.
It seems to me that the most obvious way to demonstrate the brilliance and excellent outcomes of the applied math of optimization would be to generate large sums of money, rather than seeking endorsements.
The Singularity Institute could begin this at no cost (beyond the opportunity cost of staff time) by employing the techniques of rationality in a fake market: paper trading, for example, if stock opportunities were the chosen venue. After a few months of fake profits, SI could stake the traders with $1,000. If that kept growing, then a larger investment could be considered.
This has been done, very recently. Someone on Overcoming Bias wrote of how they and some friends each made about $500 from a small investment by identifying an arbitrage opportunity between InTrade and another prediction market, without any loss.
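To make the mechanics concrete, here is a toy sketch of that kind of cross-market arbitrage. The function name and the prices are illustrative assumptions, not the actual InTrade figures: if a YES contract on one market plus a NO contract on the other together cost less than the $1 they jointly pay out, the difference is locked in regardless of the outcome.

```python
# Hypothetical sketch of prediction-market arbitrage; prices are made up.

def arbitrage_profit(yes_price_a, no_price_b, stake):
    """Profit from buying YES on market A and NO on market B.

    Each contract pays $1 if its side occurs, so one YES + one NO
    pair always pays exactly $1 in total. If the pair costs less
    than $1, the difference is risk-free.
    """
    cost_per_pair = yes_price_a + no_price_b
    if cost_per_pair >= 1.0:
        return 0.0  # no arbitrage at these prices
    pairs = stake // cost_per_pair        # whole contract pairs affordable
    return pairs * (1.0 - cost_per_pair)  # guaranteed payout minus cost

# e.g. YES at $0.60 on one market, NO at $0.35 on the other
profit = arbitrage_profit(0.60, 0.35, stake=100.0)
```

At those made-up prices, $100 buys 105 contract pairs for roughly $5.25 of guaranteed profit, which is consistent with the modest scale of the $500 anecdote.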
Money can be made, according to the proverb, by being faster, luckier, or smarter. It’s impossible to create luck in the market, and in the era of microsecond purchases by Goldman Sachs it’s very nearly impossible to be faster, but an organization (or perhaps associated organizations?) devoted to defeating internal biases and mathematically assessing the best choices in the world should be striving to be smarter.
While it seems very interesting and worthwhile to work on existential risk from UFAI directly, it seems like the smarter thing to do might be to devote a decade to making an immense pile of money for the institute and developing the associated infrastructure (hiring money managers, socking a bunch away into Berkshire Hathaway for safety, etc.). Then hire a thousand engineers and mathematicians. And what’s more, you’ll raise awareness of UFAI far more than you would have otherwise, plugging along as another $1-2m charity.
I’m sure this must have been addressed somewhere, of course—there is simply way too much written in too many places by too many smart people. But it is odd to me that SI’s page on Strategic Insight doesn’t have as #1: Become Rich. Maybe if someone notices this comment, they can point me to the argument against it?
The official introductory SI pages may have to sugarcoat such issues due to PR considerations (“everyone get rich, then donate your riches” sends off a bad vibe).
As you surmised, your idea has been brought up quite often in various contexts, especially in optimal charity discussions. For many/most endeavors, the globally optimal starting steps are “acquire more capabilities / become more powerful” (players of strategy games may be more explicitly cognizant of that stratagem).
I also do remember speculation that friendly AI and unfriendly AI may act very similarly at first—both choosing the optimal path to powering up, so that they can pursue the differing goals of their respective utility functions more efficiently at a future point in time. So your thoughts on the matter seem compatible with the local belief cluster.
Your money proverb seems to still hold true; anecdotally, I’m acquainted with some CS people making copious amounts of money on NASDAQ doing simple ANOVA analyses while barely being able to spell the companies’ names. So why aren’t we doing that? Maybe a combination of mental inertia and being locked into a research/get-endorsements modus operandi, which may be hard to shift out of into a more active “let’s create start-ups”/“let’s do day-trading” mode.
A goal-function of “seek influential person X’s approval” will lead to a different mindset from “let quantifiable results speak for themselves”; the latter allows you not to optimize every step of the way for signalling purposes.
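For concreteness, here is a minimal sketch of the kind of “simple ANOVA” screen alluded to above: a one-way F-statistic comparing mean daily returns across groups of tickers. Everything here is an illustrative assumption, not anyone’s actual NASDAQ strategy, and the return figures are invented.

```python
# Illustrative one-way ANOVA on made-up daily returns; not a real strategy.

def f_oneway(*groups):
    """One-way ANOVA F-statistic: between-group variance over within-group."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # between-group sum of squares, k - 1 degrees of freedom
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares, N - k degrees of freedom
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# three hypothetical tickers' daily returns; a large F suggests the
# group means genuinely differ, i.e. one ticker systematically outperforms
f = f_oneway([0.01, 0.02, 0.015],
             [0.005, 0.004, 0.006],
             [-0.01, -0.012, -0.009])
```

A real screen would of course compare the statistic against an F-distribution threshold (e.g. via `scipy.stats`) before trading on it; the point is only that the math itself is elementary.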
How would you even pose the question of AI risk to someone in the eighteenth century?
I’m trying to imagine what comes out the other end of Newton’s chronophone, but it sounds very much like “You should think really hard about how to prevent the creation of man-made gods.”
I don’t think it’s plausible that people could stumble on the problem statement 300 years ago, but within that hypothetical, it wouldn’t have been too early.
It seems to me that 100 years ago (or more) you would have to consider pretty much any philosophy and mathematics to be relevant to AI risk reduction, as well as to the reduction of other potential risks, and attempts to select the work particularly conducive to AI risk reduction would not have been able to succeed. Effort planning is the key to success.
On a somewhat unrelated note: reading the publications and this thread, there is a point of definition that I do not understand: what exactly does SI mean when it speaks of a “utility function” in the context of an AI? Is it a computable mathematical function over a model, such that the ‘intelligence’ component computes the action that results in the maximum of that function taken over the world state resulting from the action?
Also, not just wanting to flash academic applause lights but also genuinely curious: which mathematical successes have been due to effort planning? Even in my own mundane commercial programming experience, the company that won the biggest was more “This is what we’d like, go away and do it and get back to us when it’s done...” than “We have this Gantt chart...”.
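Regarding the utility-function question above: one common reading (offered here as an illustrative sketch, not SI’s formal definition) is exactly what the question describes: a computable function scoring world states, with the ‘intelligence’ component searching for the action whose predicted outcome scores highest.

```python
# Toy rendering of "utility function + argmax over actions"; all names
# here are illustrative, not SI's formal definition.

def choose_action(actions, transition, utility, state):
    """Pick the action whose predicted successor state has maximal utility.

    transition(state, action) -> the model's predicted next world state
    utility(state)            -> a real number scoring that state
    """
    return max(actions, key=lambda a: utility(transition(state, a)))

# tiny worked example: the state is a number, actions nudge it,
# and utility prefers states close to 10
actions = [-1, 0, +1]
transition = lambda s, a: s + a
utility = lambda s: -abs(s - 10)

best = choose_action(actions, transition, utility, state=8)  # chooses +1
```

The hard open questions, on this reading, are about what the model and the utility function range over, not about the argmax itself, which is why the toy version fits in a dozen lines.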
There are very few people who would have understood in the 18th century, but Leibniz would have understood in the 17th. He underestimated the difficulty in creating an AI, like everyone did before the 1970s, but he was explicitly trying to do it.
Your definition of “explicit” must be different from mine. Working on prototype arithmetic units and toying with the universal characteristic is AI research? He subscribed wholeheartedly to the ideographic myth; the most he would have been capable of is a machine that passes around LISP tokens.
In any case, based on the Monadology, I don’t believe Leibniz would consider the creation of a godlike entity to be theologically possible.
How about: “Eventually your machines will be so powerful they can grant wishes. But remember that they are not benevolent. What will you wish for when you can make a wish-machine?”
100 years ago it would seem to have been too early to fund work on AI risk mitigation
Hilarious, and an unfairly effective argument. I’d like to know such people, who can entertain an idea that will still be tantalizing yet unresolved a century out.
that seems like an overly narrow choice, reliant on the presumption that not working on AI risk mitigation now is not itself an alternative.
Yes. I agree with everything else, too, with the caveat that SI is not the first organization to draw attention to AI risk (not that you said so).
Surely “Effort planning is a key to success”?
How about: “Eventually your machines will be so powerful they can grant wishes. But remember that they are not benevolent. What will you wish for when you can make a wish-machine?”
Oh, wait… The tale of the Tower of Babel was told via chronophone by people from the future right before succumbing to uFAI!
That’s hindsight. Nobody could have reasonably foreseen the rise of very powerful computing machines that long ago.