If we have an AGI, it will figure out what problems we need solved and solve them. It may not beat a narrow AI (ANI) at the latter, but it will beat you at the former. You can thus save on the massive losses due to not knowing what you want, politics, not knowing how to best optimize something, etc. I doubt we’d be able to do 1% as well without an FAI (friendly AI) as with one. That’s still a lot, but it means that a 0.1% chance of producing an FAI and a 99.9% chance of producing a UAI (unfriendly AI) is better than a 100% chance of producing a whole lot of ANIs.
The latter’s usefulness (to me, that is) is incredibly narrow.
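To make the expected-value comparison in that quoted argument concrete, here is a minimal sketch; the payoff scale and the assumption that a UAI outcome is worth zero are mine, and only the 0.1% and 1% figures come from the quote.

```python
# Minimal sketch of the expected-value comparison in the quoted argument above.
# Every number below is an illustrative assumption except the quote's own
# "0.1% chance of an FAI" and "1% as well without an FAI" figures.

def expected_value(p_fai, v_fai=1.0, v_uai=0.0):
    """Expected payoff of attempting full AGI: friendly with probability p_fai,
    otherwise unfriendly, which is assumed here to pay out v_uai."""
    return p_fai * v_fai + (1.0 - p_fai) * v_uai

p_fai = 0.001         # the quote's 0.1% chance of getting friendliness right
ani_fraction = 0.01   # ANIs delivering 1% of what an FAI would (the quote's bound)

ev_agi_attempt = expected_value(p_fai)  # 0.001 of the FAI payoff, if a UAI is worth 0
ev_ani_path = ani_fraction              # a sure 1% of the FAI payoff

# Under these assumptions the gamble beats the sure ANI path only when
# p_fai exceeds ani_fraction; a negative v_uai (catastrophe) tilts it further
# toward the ANI path.
print(ev_agi_attempt, ev_ani_path)
```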
If we have an AGI, it will figure out what problems we need solved and solve them.
Only a friendly AGI would. The premise for funding SI is not that they will build a friendly AGI. The premise is that there is an enormous risk that someone else would, for no particular reason, add this whole ‘valuing the real world’ thing into an AI without adding any friendliness, actually restricting its generality when it comes to doing something useful.
Ultimately, the SI position is: input from us, the idea guys with no achievements (outside philosophy), is necessary for a team competent enough to build a full AGI not to kill everyone, and therefore you should donate. (Previously, the position was that you should donate so we build FAI before someone builds UFAI, but Luke Muehlhauser has been generalizing to non-FAI solutions.) That notion is rendered highly implausible when you pin down the meaning of AGI, as we did in this discourse. For the UFAI to happen and kill everyone, a potentially vastly more competent and intelligent team than SI has to fail spectacularly.
Only if his own thing isn’t also your own thing.
That will require a simulation of me, or a brain implant that effectively makes it an extension of me. I do not want the former, and the latter is IA (intelligence amplification).