Does “FAI-focused” mean what I called code first? What are your thoughts on that post and its followup? What is this new non-profit planning to do differently from SIAI and why? What are the other things that you could be doing?
Incomplete response:

Jah. Well, at least determining whether “code first” is even reasonable, which is a difficult question in itself and only partially tied to making direct progress on FAI.
What are your thoughts on that post and its followup?
You seem to have missed Oracle AI? (Eliezer’s dismissal of it isn’t particularly meaningful.) I agree with your concerns. This is why the main focus would, at least initially, be determining whether “code first” is a plausible approach, both difficulty-wise and safety-wise. The value of information on that question is incredibly high, and, as you’ve pointed out, it has not been sufficiently researched.
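To make the value-of-information point concrete, here is a minimal sketch with entirely made-up probabilities and payoffs (`p_workable`, the payoff figures, and the alternative strategy are all illustrative assumptions, not estimates from this discussion): resolving whether “code first” is workable before committing to it can be worth far more than the best action chosen under uncertainty.

```python
# Toy value-of-perfect-information calculation for "is 'code first' workable?".
# All numbers below are illustrative assumptions, not estimates from the thread.

p_workable = 0.3                    # prior probability that "code first" is workable
payoff_pursue_if_workable = 100.0   # value of pursuing it in the world where it works
payoff_pursue_if_not = -50.0        # cost of pursuing it in the world where it doesn't
payoff_alternative = 10.0           # value of the best alternative strategy

# Expected value of each action chosen *without* resolving the question:
ev_pursue = (p_workable * payoff_pursue_if_workable
             + (1 - p_workable) * payoff_pursue_if_not)   # 0.3*100 + 0.7*(-50) = -5.0
ev_without_info = max(ev_pursue, payoff_alternative)       # best we can do blind: 10.0

# With the question resolved, we pick the best action in each state of the world:
ev_with_info = (p_workable * max(payoff_pursue_if_workable, payoff_alternative)
                + (1 - p_workable) * max(payoff_pursue_if_not, payoff_alternative))
                                                            # 0.3*100 + 0.7*10 = 37.0

voi = ev_with_info - ev_without_info
print(f"EV without info: {ev_without_info:.1f}")            # 10.0
print(f"EV with info:    {ev_with_info:.1f}")               # 37.0
print(f"Value of perfect information: {voi:.1f}")           # 27.0
```

Under these made-up numbers, answering the feasibility question first nearly quadruples the expected value, which is the sense in which the information is “incredibly high” value even when the research itself makes no direct progress on FAI.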
What is this new non-profit planning to do differently from SIAI and why?
Basically everything. SingInst is focused on funding a large research program and on gaining the prestige necessary to influence (academic) culture and both academic and political policy. They’re not currently doing any research on Friendly AI, and their political situation is such that I don’t expect them to be able to do so effectively for a while, if ever. I will not clarify this. (Actually, their research associates are working on FAI-related things, but SingInst doesn’t pay them to do that.)
What are the other things that you could be doing?
Learning, mostly. Working with an unnamed group of x-risk-cognizant people that LW hasn’t heard of, in a way unrelated to their setting up a non-profit.
They’re not currently doing any research on Friendly AI, and their political situation is such that I don’t expect them to be able to do so effectively for a while, if ever.
My understanding is that SIAI recently tried to set up a new in-house research team to do preliminary research into FAI (i.e., not try to build an FAI yet, but just do whatever research might eventually be helpful to that project). This effort didn’t get off the ground, but my understanding, again, is that this was because the researchers they tried to recruit had various reasons for not joining SIAI at this time. I was one of those they tried to recruit, and while I don’t know what the others’ reasons were, mine were mostly personal and not related to politics.
You must also know all this, since you were involved in this effort. So I’m confused why you say SIAI won’t be doing effective research on FAI due to its “political situation”. Did the others not join SIAI because they thought SIAI was in a bad political situation? (This seems unlikely to me.) Or are you referring to the overall lack of qualified, recruitable researchers as a “political situation”? If you are, why do you think this new organization would be able to do better?
(Or did you perhaps not learn the full story, and thought SIAI stopped this effort for political reasons?)
The answer to your question isn’t among your list of possible answers. The recent effort to start an in-house research team was a good attempt and didn’t fail for political reasons; I am speaking of other things. However, I want to take a few weeks off from discussing such topics; I seem to have given entirely the wrong impression and would prefer to start the discussion anew in a better context, e.g. one that emphasizes cooperation and tentativeness rather than reactionary competition. My apologies.
I was trying to quickly gauge vague interest in a vague notion.
I won’t give that evidence here.
(I won’t substantiate that claim here.)
I will not clarify this.
The answer to your question isn’t among your list of possible answers.
I find this stressful; it’s why I make token attempts to communicate with Less Wrong in extremely abstract or indirect ways, despite the apparent fruitlessness. But there’s really nothing for it.
If you can’t say anything, don’t say anything.

It’s a good heuristic, but it can be very psychologically difficult to follow, e.g. if you think that not even trying to communicate will be seen as unjustified in retrospect, even when people should know that there was no obvious way for you to communicate. This has happened to me often enough that the thought of just giving up on communication is highly aversive; my fear of being blamed for not preventing others from taking unjustified actions (actions that will cause me, them, and the universe counterfactually-needless pain) is too great. But whatever, I’m starting to get over it.
Like, I remember a pigeon dying… people dying… a girl who starved herself… a girl who cut herself… a girl who wanted to commit suicide… just, trust me, there are reasons that I’m afraid. I could talk about those reasons but I’d rather not. It’s just, if you don’t even make a token gesture it’s like you don’t even care at all, and it’s easier to be unjustified in a way that can be made to look sorta like caring than in a way that looks like thoughtlessness or carelessness.
(ETA: A lot of the time, when people give me or others advice, I mentally translate it to “The solution is simple: just shut up and be evil.”)
I have no particular attachments to SIAI and would love to see a more effective Singularitarian organization formed if that were possible. I’m just having genuine trouble understanding why you think this new proposed organization will be able to do more effective FAI research. Perhaps you could use these few weeks off to ask some trusted advisors how to better communicate this point. (I understand you have sensitive information that you can’t reveal, but I’m guessing that you can do better even within that constraint.)
Perhaps you could use these few weeks off to ask some trusted advisors how to better communicate this point.
This is exactly the strategy I’ve decided to enact, e.g. talking to Anna. Thanks for being… gentle, I suppose? I’ve been getting a lot of flak lately, it’s nice to get some non-insulting advice sometimes. :)
(Somehow I completely failed to communicate the tentativeness of the ideas I was throwing around; in my head I was giving it about a 3% chance that I’d actually work on helping build an organization but I seem to have given off an impression of about 30%. I think this caused everyone’s brains to enter politics mode, which is not a good mode for brains to be in.)
I have no particular attachments to SIAI and would love to see a more effective Singularitarian organization formed if that were possible.
It’s rather strange how secretive the SIAI is. The military projects are secretive, and the commercial projects are secretive; alas, so few value transparency. An open project would surely do better, by being more obviously trustworthy and accountable, by being better able to use talent from across the internet, and so on. I figure that if the SIAI persists in not getting to grips with this issue, some other organisation will.
Working with an unnamed group of x-risk-cognizant people that LW hasn’t heard of

Could you tell us about them?