I believe that SI is a valuable organisation and would be pleased if they were to keep their current level of funding.
I believe that withholding funds won’t work very well, and that they are rational and intelligent enough to become aware of their shortcomings sooner or later and update accordingly.
I agree with this conclusion, and also with Karnofsky’s assessment that the hypotheses currently espoused by SI about how AI will play out are very speculative.
Do you feel this conflicts with opinions expressed on your blog? If not, why not?
Your question demands a thoughtful reply, and I don’t have the time to give one right now.
Maybe the following snippet from a conversation with Holden can shed some light on what is really a very complicated subject:
I believe that SIAI, even given its shortcomings, is valuable. It makes people think, especially the AI/CS crowd, and it provokes debate.
I certainly do not envy you for having to decide if it is a worthwhile charity.
What I am saying is that I wouldn’t mind if it kept its current funding. If I believed there was even a small chance that they could build the kind of AI they envision, however, I would probably actively try to make them lose funding.
My position is probably inconsistent and highly volatile.
Just think about it this way. If you asked me whether I desire a world state where people like Eliezer Yudkowsky are able to think about AI risks, I would say yes. If you asked me why I wouldn’t rather allocate that money to protecting poor people against malaria, I can only admit that I don’t have a good answer. That is an extremely difficult problem.
As I said, I am glad that people like you are thinking about those questions. And if I had to choose between funding you to think about charitable giving in general and funding Eliezer Yudkowsky to think about AI risks, I would certainly fund you.
END OF EMAIL
I think it would be worth a substantial investment (not 1% of GDP, but more than $500,000 a year) to decrease the likelihood of the emergence of an agent that is far smarter than humans and hostile to them.
That doesn’t mean that I believe that “this is crunch time for the entire human species”. If it were up to me to allocate the world’s resources, I would also fund David Chalmers to think about consciousness.
I wrote that I “would be pleased” if they were to keep their current level of funding. I did not say that I recommend that people contribute money to SIAI, or that I would personally donate money.
I might change my mind at any time though. I am still at the beginning of the exploration phase.
Okay.
Do you feel these views conflict with calling their views “Bullshit!” (emphasis yours) on your blog? If not, why not?
Well, I for one believe that SI is wasting people’s money on the work of building AIs out of fuzzy English concepts, work that has zero value.
Giving incompetents more power (in dollars) to steer progress has negative expected utility, based on all prior instances of having incompetents in control. So it is paramount that those in control at SI demonstrate that they are not incompetent.