The point is that the extent and nature of charity that is best for you individually is not the same as that which maximizes social welfare. The optimal extent of charity for you personally might be 0.
It might be optimal for you personally to go work as an actuary and retire at 40, or to pursue your personal interest in elliptic curve research. Whatever.
I can see how taking charities seriously may drain you of resources. But I don’t see how it applies to existential risk reduction activities. Have you invented some method of spending all your money to get FAI faster, or something?
Yes, that was a dig at SIAI and similar institutions. I honestly have no idea why we need them. If academia doesn’t work for him, Eliezer could have pursued his ideas and published them online while working a day job, as lots of scientists did. He’d make just the same impact.
I would not have been able to write and pursue a day job at the same time. You seem to have incredibly naive ideas about the amount of time and energy needed to accomplish worthwhile things. There are historical exceptions to this rule, but (a) they are exceptions and (b) we don’t know how much faster they could have worked if they’d been full-time.
A day job doesn’t have to exhaust you. For example, I have a “day job” as a programmer where I show up at the office once a week, so I have more free time than I know what to do with. I don’t believe you are less capable of finding such a job than me, and I don’t believe that none of your major accomplishments were made while multitasking.
It’s not trivial to find one that doesn’t and that takes up only a fraction of your time. You need luck or ingenuity. It makes things simpler if you can just get that problem out of the way—after all, it’s something we know how to do. Trivial (and not so trivial) inconveniences that have known resolutions should be removed; it’s that simple.
You’re silly. I suppose if you started doing things in your free time that are as interesting as what I do in my professional full-time workdays I would pay attention to you again.
You did a good thing: my last two top-level posts were partly motivated by this comment of yours. And for the record, at the same time as I was writing them, at my day job we launched a website with daily maps of forest fires in Russia that got us 40k visitors a day for a while, got featured on major news sites and on TV, and got used by actual emergency teams. It’s been a crazy month. Thankfully, right now Moscow is no longer covered in smoke and I can relax a little.
Coincidentally, in that time I had several discussions with different people about the same topic. For some reason all of them felt that you have to be “serious” about whatever you do, do it “properly”, etc. I just don’t believe it. What matters is the results. There’s no law of nature saying you can’t get good results while viewing yourself as an amateur light-headed butterfly. In fact, I think it helps!
You have to work on systematically developing mastery, though. Difficult problems (especially ones without clear problem statements) require thousands of hours of background-building and familiarizing yourself with the problem before you can take steps in the right direction, even where those steps appear obvious and easy in retrospect, and even where specific subproblems can be resolved easily without that background. You need to be able to ask the right questions, not only to answer them.
It doesn’t seem natural to describe such work as an act of “amateur light-headed butterfly”. Butterflies don’t work in coal mines.
Sorry, can’t parse. Are you making any substantive argument? What’s the difference between your worktime now and the free time you’d have if you worked an easy day job, or supported yourself with contract programming, or something? Is it only that there’s more of it, or is there a qualitative difference?
Time, mental energy, focus. I cannot work two jobs and do justice to either of them.
I am feeling viscerally insulted by your assertion that anything I do can be done in my spare time. Let’s try that with nuclear engineers and physicists and lawyers and electricians, shall we? Oh, I’m sorry, was that work actually important enough to deserve a real effort or something?
Sorry, I didn’t mean to insult you. Also I didn’t downvote your comment, someone else did.
What worries me is the incongruity of it all. What if Einstein, instead of working as a patent clerk and doing physics at the same time, chose to set up a Relativity Foundation to provide himself with money? What if this foundation went on for ten years without actually publishing novel rigorous results, only doing advocacy for the forthcoming theory that will revolutionize the physics world? This is just, uh...
A day job is actually the second recourse that comes to mind. The first recourse is working in academia. There’s plenty of people there doing research in logic, probability, computation theory, game theory, decision theory or any other topic you consider important. Robin Hanson is in academia. Nick Bostrom is in academia. Why build SIAI?
Just as an aside, note that Nick Bostrom is in academia at the Future of Humanity Institute at Oxford, which he personally founded (as Eliezer founded SIAI) and which has been funded mostly by donations (like the SIAI), mainly those of James Martin. That funding stream allows the FHI to focus on the important topics it does focus on, rather than devoting all its energy to slanting work in favor of the latest grant fad. FHI’s ability to expand with new hires, and even to sustain operations, depends on private donations, although grants have also played important roles. Robin spent many years getting tenure, mostly focused on relatively standard topics.
One still needs financial resources to get things done in academia (and devoting one’s peak years to tenure-optimized research in order to exploit post-tenure freedom has a sizable implicit cost, not to mention the opportunity costs of academic teaching loads). The main advantages, which are indeed very substantial, are increased status and access to funding from grant agencies.
Thank you for the balanced answer.
Are people in academia really unable to spend their “peak years” researching stuff like probability, machine learning or decision theory? I find this hard to believe.
Of course people spend their peak years working in those fields. If Eliezer took his decision theory stuff to academia he could pursue that in philosophy. Nick Bostrom’s anthropic reasoning work is well-accepted in philosophy. But the overlap is limited. Robin Hanson’s economics of machine intelligence papers are not taken seriously (as career-advancing work) by economists. Nick Bostrom’s stuff on superintelligence and the future of human evolution is not career-optimal by a large margin on a standard philosophy track.
There’s a growing (but still pretty marginal, in scale and status) “machine ethics” field, but analysis related to existential risk or superintelligence is much less career-optimal there than issues related to Predator drones and similar.
Some topics are important from an existential risk perspective and well-rewarded (which tends to result in a lot of talent working on them, with diminishing marginal returns) in academia. Others are important, but less rewarded, and there one needs slack to pursue them (donation funding for the FHI with a mission encompassing the work, tenure, etc).
There are various ways to respond to this. I see a lot of value in trying to seed certain areas, illuminating the problems in a respectable fashion so that smart academics (e.g. David Chalmers) use some of their slack on under-addressed problems, and hopefully eventually make those areas well-rewarded.
For an important topic, it makes sense to have a dedicated research center. And in the end, SIAI is supposed to create a Friendly AI for real, not just to design it. As it turns out, SIAI also manages to serve many other purposes, like organizing the summits. As for FAI theory, I think it would have developed more slowly if Eliezer had apprenticed himself to a computer science department somewhere.
However, I do think we are at a point where the template of the existing FAI solution envisaged by SIAI could be imitated by mainstream institutions. That solution is, more or less: figure out the utility function implicit in the human decision process, figure out the utility function produced by reflective idealization of that natural utility function, and create a self-enhancing AI with this second utility function. I think that is an approach to ethical AI which could easily become the consensus idea of what should be done.
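To make the three-step template concrete, here is a purely illustrative toy sketch in Python. Every function name and data structure in it is a hypothetical placeholder of my own invention rather than anything from SIAI’s actual work, and the “inference” and “idealization” steps are deliberately trivial stand-ins.

```python
# Toy sketch of the three-step template above. All names are hypothetical
# placeholders; the "inference" and "idealization" steps are trivial stand-ins.
from typing import Callable, Dict, List, Tuple

Outcome = str
Choice = Tuple[Outcome, Outcome]  # (option taken, option rejected)


def infer_natural_utility(observed: List[Choice]) -> Callable[[Outcome], float]:
    """Step 1: fit a utility function consistent with observed decisions
    (here: just count how often each outcome was preferred)."""
    score: Dict[Outcome, int] = {}
    for taken, rejected in observed:
        score[taken] = score.get(taken, 0) + 1
        score.setdefault(rejected, 0)
    return lambda o: float(score.get(o, 0))


def reflectively_idealize(u: Callable[[Outcome], float]) -> Callable[[Outcome], float]:
    """Step 2: produce the utility function that idealized reflection would
    endorse (here: a placeholder that returns the input unchanged)."""
    return u


def choose(options: List[Outcome], u: Callable[[Outcome], float]) -> Outcome:
    """Step 3 (stand-in for the optimizing agent): maximize the idealized utility."""
    return max(options, key=u)


if __name__ == "__main__":
    observed = [("tea", "coffee"), ("tea", "water"), ("coffee", "water")]
    idealized = reflectively_idealize(infer_natural_utility(observed))
    print(choose(["tea", "coffee", "water"], idealized))  # -> tea
```

The point of the sketch is only the shape of the pipeline: observed decisions go in, an idealized utility function comes out, and the agent optimizes that function rather than the raw one.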
Setting up a Relativity Foundation is a harder job than being a patent clerk.
The difference is the attention spent on contract programming. If this can be eliminated, it should be. And it can.
From what I understand, SIAI was meant to eventually support at least 10 full-time FAI researchers/implementers. How is Eliezer supposed to “make the same impact” by doing research part-time while working a day job?
I think the hard problem is finding 10 capable and motivated researchers, and any such people would keep working even without SIAI. Eliezer can make an impact the same way he always does: by proving to the Internet that the topic is interesting.
Again: why isn’t it obvious to you that it would be easier for these people to have a source of funding and a building to work in?
No. Just no.
Why? I gave the example of Wei Dai who works independently from the SIAI. If you know any people besides Eliezer who do comparable work at the SIAI, who are they?
The problem with your example is that I don’t work on FAI, I work on certain topics of philosophical interest to me that happen to be relevant to FAI theory. If I were interested in actually building an FAI, I’d definitely want a secure source of funding for a whole team to work on it full time, and a building to work in. It seems implausible that that’s not a big improvement (in likelihood of success) over a bunch of volunteers working part time and just collaborating over the Internet.
More generally, money tends to be useful for getting anything accomplished. You seem to be saying that FAI is an exception, and I really don’t understand why… Or are you just saying that SIAI in particular is doing a bad job with the money that it’s getting? If that’s the case, why not offer some constructive suggestions instead of just making “digs” at it?
I don’t believe FAI is ready to be an engineering project. As Richard Hamming would put it, “we do not have an attack”. You can’t build a 747 before some hobbyist invents the first flyer. The “throw money and people at it” approach has been tried many times with AGI; how is FAI different? I think right now most progress should come from people like you, satisfying their personal interest. As for the best use of SIAI money, I’d use GiveWell to get rid of it, or just throw some parties and have fun all around, because money isn’t the limiting factor in making math breakthroughs happen.
I think the problem with that is that most people have multiple interests, or their interests can shift (perhaps subconsciously) based on considerations of money and status. FAI-related fields have to compete with other fields for a small pool of highly capable researchers, and the lack of money and status (which would come with funding) does not help.
Me neither, but I think that, one, SIAI can use the money to support FAI-related research in the meantime, and two, given that time is not on our side, it seems like a good idea to build up the necessary institutional infrastructure to support FAI as an engineering project, just in case someone makes an unexpected theoretical breakthrough.
Marcello, Anna Salamon, Carl Shulman, Nick Tarleton, plus a few up-and-coming people I am not acquainted with.
I don’t do any work comparable to Eliezer’s.
Why don’t you? You are brilliant, and you understand the problem statement, you merely need to study the right things to get started.
I don’t do any original work comparable to Eliezer.
I don’t do anything comparable to Eliezer.
Is their research secret? Any pointers?
Marcello’s research is secret, but not that of the others.
Sorry for deleting my comment, I didn’t think you’d answer it so quickly. For posterity, it said: “Is their research secret? Any pointers?”
Here’s the list of SIAI publications. Apart from Eliezer’s writings, there’s only one moderately interesting item on the list: Peter de Blanc’s “convergence of expected utility” (or divergence, rather). That’s… good, I guess? My point stands.
Is it secret why it’s secret? I can’t imagine.
Yes. If anyone finds out why Marcello’s research is secret, they have to be killed and cryopreserved for interrogation after the singularity.
Now why do you even ask why should people be afraid of something going terribly wrong at SIAI? Keeping it secret in order to avoid signaling the moment where it becomes necessary to keep it secret? Hmm...
Isn’t it better to have an option of pursuing your research without having to work a day job? Presumably, this will allow you to focus more on research...
But… create a big organization that generates no useful output, except providing you with some money to live on? Is it really the path of least effort? SIAI has existed for 10 years now and here are its glorious accomplishments broken down by year. Frankly, I’d be less embarrassed if Eliezer were just one person doing research!
Yes, well, in retrospect many things are seen as suboptimal. Remember that SIAI was founded back when Eliezer hadn’t yet figured out the importance of Friendliness and thought we needed a big concerted effort to develop an AGI. Later, he was unable to interest sufficiently qualified people in doing research on FAI (equivalently, to explain the problem so that qualified people would both understand it and take it seriously). This led to blogging on Overcoming Bias and now Less Wrong, which does seem to be a successful, if insanely inefficient, way of explaining the problem. The current SIAI seems to have a chance of mutating into a source of funding for more serious FAI research, but as multifoliaterose points out, right now publicity seems to be a more efficient path to eventually getting things done, since we need to actually find researchers to produce the accomplishments whose absence you protest.
Since you have advanced the state of the art both here and at decision-theory-workshop, I will take this opportunity to ask you: is your research funded by SIAI? Would it progress faster if it were? Is money the limiting factor?
I’ll reply privately via e-mail (SIAI doesn’t fund me, and it’d be helpful if a few unlikely things were different).
For the record, Vladimir did reply.
The advantages of an organization are mutual support, improving the odds of continuity if something happens to Eliezer, and improving the odds of getting more people who can do high-level work.
I don’t have a feeling for how fast new organizations for original thought and research should be expected to get things done. Anyone have information?
I don’t see who else does high-level work at SIAI and who will continue it if Eliezer gets hit by a bus. Wei Dai had the most success building on Eliezer’s ideas, but he’s not an SIAI employee and SIAI didn’t spark his interest in the topic.
Sure, easy: just donate 100% of your disposable income to SIAI.