It can only be said to be powerful if it tends to do something significant regardless of how you try to stop it. If the things it does have anything in common, even if it’s nothing beyond “significant”, it can be said to value that.
Actually, this is an example of something incredibly irritating about this entire singularity topic: verbal sophistry of no consequence. What you call ‘powerful’ has absolutely zero relation to anything. A powerful drill doesn’t tend to do something significant regardless of how you try to stop it. Neither does a powerful computer. Nor should a powerful intelligence.
A powerful drill doesn’t tend to do something significant regardless of how you try to stop it. Neither does a powerful computer. Nor should a powerful intelligence.
In this case, I’m defining a powerful intelligence differently. An AI that is powerful in your sense is not much of a risk. It’s basically the kind of AI we have now. It’s neither highly dangerous, nor highly useful (in a singularity-inducing sense).
Building an AGI may not be feasible. If it is, it will be far more effective than a narrow AI, and far more dangerous. That’s why it’s primarily what SIAI is worried about.
nor highly useful (in a singularity-inducing sense).
I’m not clear on what we mean by ‘singularity’ here. If we had an algorithm that works on well-defined problems, we could solve practical problems. edit: Like improving that algorithm itself, mind uploading, etc.
Building an AGI may not be feasible. If it is, it will be far more effective than a narrow AI,
Effective at what? Would it cure cancer sooner? I doubt it. An “AGI” with a goal of its own, resisting any control, is a much more narrow AI than the AI that basically solves systems of equations. Who would I rather hire: an impartial math genius who solves the tasks you specify for him, or a brilliant murderous sociopath hell-bent on doing his own thing? The latter’s usefulness (to me, that is) is incredibly narrow.
and far more dangerous.
Besides being effective at being worse than useless?
That’s why it’s primarily what SIAI is worried about.
I’m not quite sure there is a ‘why’ and a ‘what’ in that ‘worried’.
If we have an AGI, it will figure out what problems we need solved and solve them. It may not beat a narrow AI (ANI) at the latter, but it will beat you at the former. You can thus avoid the massive losses due to not knowing what you want, politics, not knowing how best to optimize something, etc. I doubt we’d be able to do 1% as well without an FAI as with one. That’s still a lot, but it means that a 0.1% chance of producing an FAI and a 99.9% chance of producing a UFAI is better than a 100% chance of producing a whole lot of ANIs.
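To make the expected-value comparison concrete, here’s a back-of-the-envelope sketch; the specific values of V_ANI and V_UFAI are illustrative assumptions (and treating the UFAI outcome as merely worthless, rather than catastrophic, is generous to the AGI attempt):

```python
# Back-of-the-envelope expected-value comparison (illustrative numbers only).
# Normalize the value of a successful FAI to 1.0, and assume (purely for the
# sketch) that a UFAI outcome is worth ~0 rather than strongly negative.

V_FAI = 1.0      # value if a Friendly AGI is built
V_UFAI = 0.0     # assumed value of the unfriendly outcome
V_ANI = 0.0005   # assumed value of "a whole lot of ANIs" (0.05% of the FAI value)

p_FAI = 0.001    # 0.1% chance the AGI attempt yields an FAI
p_UFAI = 1 - p_FAI

ev_agi_attempt = p_FAI * V_FAI + p_UFAI * V_UFAI   # = 0.001
ev_ani_only = V_ANI                                # = 0.0005

print(ev_agi_attempt > ev_ani_only)  # True under these assumptions
# The comparison only favours the AGI attempt if V_ANI < p_FAI * V_FAI,
# i.e. if the ANIs capture less than 0.1% of the FAI value.
```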
The latter’s usefulness (to me, that is) is incredibly narrow.
If we have an AGI, it will figure out what problems we need solved and solve them.
Only a friendly AGI would. The premise for funding SI is not that they will build a friendly AGI. The premise is that there is an enormous risk that someone else would, for no particular reason, add this whole ‘valuing the real world’ thing into an AI without adding any friendliness, actually restricting its generality when it comes to doing something useful.
Ultimately, the SI position is: input from us, the idea guys with no achievements (outside philosophy), is necessary for a team competent enough to build a full AGI not to kill everyone, and therefore you should donate. (Previously, the position was that you should donate so we can build FAI before someone builds UFAI, but Luke Muehlhauser has been generalizing to non-FAI solutions.) That notion is rendered highly implausible when you pin down the meaning of AGI, as we did in this discourse. For a UFAI to happen and kill everyone, a potentially vastly more competent and intelligent team than SI has to fail spectacularly.
Only if his own thing isn’t also your own thing.
That will require a simulation of me, or a brain implant that effectively makes it an extension of me. I do not want the former, and the latter is IA.