You say this is why you are not worried about the singularity: organizations are supra-human intelligences that seek to self-modify and become smarter.
So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.
Now, I don’t think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardware, and when they attempt to rely heavily on the algorithms we do have, it doesn’t always work out well for them. That seems more a statement about our current algorithms than about the potential of such algorithms, however.
However, there is a lot of energy on various fronts devoted to hindering organizations whose motivations lead to threats, and because these organizations rely on humans for hardware, only a small number of existential threats have been produced by them. It can be argued that one of the best reasons to develop FAI is to undo those threats and to stop organizations from creating new ones in the future. So I am not sure it follows from your position that we should not be worried about the singularity.
He says he’s not worried about the singularity because he is more worried about unfriendly organizations, as that is a nearer-term issue.
Today’s organisations are surely better candidates for self-improvement of intelligence than today’s machines are.
Of course both typically depend somewhat on the surrounding infrastructure, but organisations like the US government are fairly self-sufficient—or could easily become so—whereas machines are still completely dependent on others for extended cumulative improvements.
Basically, organisations are what we have today. Future intelligent machines are likely to arise out of today’s organisations. So, these things are strongly linked together.
Are tomorrow’s organizations better than tomorrow’s machines? Because that’s what is under discussion here.
Yes, in some ways—assuming we are talking about a time when there are still lots of humans around—since organisations are a superset of humans and machines and so can combine the strengths of both.
No doubt eventually humans will become unemployable—but not until machines can do practically all their jobs better than them. That period covers an important era which many of us are concerned with.
Ah, I didn’t realize you were including machines here—organizations are usually assumed to be composed of people, but I suppose a GAI could count as a “person” for this purpose.
However, isn’t this dependent on the AI not going foom? Because if it does go foom, I can’t see a superintelligence remaining under any pre-singularity organization’s control.
I can’t say I’ve ever heard of that one. For example, Wikipedia has this:
If you are not considering the possibility of artifacts being components of organizations, that may explain some of the cross-talk.