From what I gathered, SI’s relevance rests on an enormous conjunction of implicit assumptions and a very narrow approach as the solution, both decided upon a long time ago. Consequently, a truly microscopic probability of relevance is easily reached: I estimate at most 10^-20, from multiplying many narrow guesses over a huge space of possibilities.
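To make that arithmetic explicit (a minimal sketch; the count of ten assumptions and the per-guess probability of 10^-2 are my own illustrative choices, not figures SI has published): if relevance requires n independent narrow guesses to all be correct, each with probability at most p, then

$$P(\text{relevance}) \le p^{\,n}, \qquad \text{e.g. } p = 10^{-2},\; n = 10 \;\Rightarrow\; P(\text{relevance}) \le 10^{-20}.$$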
Hm, most of the immediate strategies SI is considering going forward strike me as fairly general:
http://lesswrong.com/r/discussion/lw/cs6/how_to_purchase_ai_risk_reduction/
They’re also putting up these strategies for public scrutiny, suggesting they’re open to changing their plans.
If you’re referring to sponsoring an internal FAI team, Luke wrote:

I don’t take it to be obvious that an SI-hosted FAI team is the correct path toward the endgame of humanity “winning.” That is a matter for much strategic research and debate.
BTW, I wish to reinforce you for the behavior of sharing a dissenting view (rationale: view sharing should be agnostic to the dissent/assent profile, but sharing a dissenting view intuitively risks negative social consequences, an effect that would be nice to neutralize), so I voted you up.
Well, those are Luke’s aspirations; I was referring to the work done so far. The whole enterprise has the feel of an overoptimistic startup with ill-defined, extremely ambitious goals; such ventures don’t have any meaningful success rate even for much, much simpler goals.