Fair enough, but all of the examples I’d listed are reasonably well-defined problems, with reasonably well-outlined problem spaces, whose solutions appear to be, if not within reach, then at least feasible given our current level of technology. If you contrast this with the nebulous problem of FAI as lukeprog outlined it, would you not conclude that the probability of solving these less ambitious problems is much higher? If so, then the increased probability could compensate for the relatively lower utility (even though, in absolute terms, nothing beats having your own Friendly pocket genie).
would you not conclude that the probability of solving these less ambitious problems is much higher?
Honestly, the error bars on all of these expected-value calculations are so wide for me that they pretty much overlap. Especially when I consider that building a run-of-the-mill marginally-superhuman non-quasi-godlike AI significantly changes my expected value of all kinds of research projects, and that cheap plentiful energy changes my expected value of AI projects, and so on; half of them include one another as factors anyway.
So, really? I haven’t a clue.
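To make the overlap concrete, here's a minimal sketch of the comparison I have in mind. Every number in it is made up purely for illustration; neither of us has proposed actual probabilities or utilities:

```python
# A crude expected-value comparison under wide uncertainty.
# All probability and utility ranges below are invented for illustration only.

# Rough (probability, utility) ranges for two kinds of projects:
#   "modest": well-defined problems (cheap energy, modest AI, etc.)
#   "fai":    Friendly AI as lukeprog describes it
RANGES = {
    "modest": {"p": (0.05, 0.5),    "u": (1e3, 1e5)},
    "fai":    {"p": (1e-6, 1e-2),   "u": (1e7, 1e10)},
}

def ev_bounds(p_range, u_range):
    """Lower and upper bounds on expected value, treating the ranges as
    a stand-in for the 'error bars' on probability and utility."""
    low = p_range[0] * u_range[0]
    high = p_range[1] * u_range[1]
    return low, high

if __name__ == "__main__":
    for name, r in RANGES.items():
        low, high = ev_bounds(r["p"], r["u"])
        print(f"{name:>6}: expected value roughly in [{low:.3g}, {high:.3g}]")
    # With ranges this wide, the two intervals overlap heavily, so the
    # comparison doesn't clearly favor either kind of project.
```

The point isn't the particular numbers; it's that once the uncertainty on both the probabilities and the utilities is that wide, the resulting intervals swallow one another, and that's before accounting for the projects feeding into each other's expected values.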
Fair enough; I guess my error bars are just a lot narrower than yours. It’s possible I’m being too optimistic about them.