Hm, most of the immediate strategies SI is considering going forward strike me as fairly general:
http://lesswrong.com/r/discussion/lw/cs6/how_to_purchase_ai_risk_reduction/
They’re also putting up these strategies for public scrutiny, suggesting they’re open to changing their plans.
If you’re referring to sponsoring an internal FAI team, Luke wrote:
I don’t take it to be obvious that an SI-hosted FAI team is the correct path toward the endgame of humanity “winning.” That is a matter for much strategic research and debate.
BTW, I wish to reinforce you for the behavior of sharing a dissenting view (rationale: whether a view gets shared shouldn’t depend on whether it dissents or assents, but sharing a dissenting view intuitively risks negative social consequences, an effect it would be nice to neutralize), so I voted you up.
Well, those are Luke’s aspirations; I was referring to the work done so far. The whole enterprise has the feel of an over-optimistic startup with ill-defined, extremely ambitious goals; ventures like that have a poor success rate even with much, much simpler goals.