So, organizational cognition can take advantage of the physical properties of computers, and so is not bounded by wetware limits.
Ah, so you’re concerned that an organization could solve the friendly AI problem, and then make it friendly to itself rather than humanity? That’s conceivable, but there are a few reasons I’m not too concerned about it.
Organizations are made mostly out of humans, and most of their agency goes through human agency, so there's a limit to how far an organization can pursue goals that are incompatible with the goals of the people comprising it. So at the very least, an organization could not intentionally produce an AGI that is unfriendly to the members of the team that built it. It is also conceivable that the team could make the AGI friendly to its members but not to the rest of humanity, but a future utopia made perfect by AGI is about as far a concept as you can get, so most people will be idealistic about it.
Is Google “made mostly out of humans”? What about its huge datacenters? They are where a lot of the real work gets done—right?
So, I'm not sure I have this straight, but you seem to be saying that one of the reasons you are not concerned about this is that many people use a daft reasoning technique when dealing with future utopias, and that makes you idealistic about it?
If so, that’s cool, but why should rational thinkers share your lack of concern?
Google's datacenters don't have much agency. Its humans do.
No, it makes them idealistic about it.