I mean it in the non-flattering sense: rent-seeking.
I envision all sorts of arbitrary legal limits imposed on AIs. These limits will need people to dream them up, evangelize the need for even more limits, and enforce the limits (likely involving the creation of other ‘enforcer’ AIs). Some of the limits (early on) will be good ideas, but as time goes on they will become more arbitrary and exploitable. If you want examples, just think of what laws will be proposed to stop unfriendly AI and to stop individuals from using AI to do evil (say, with an advanced makerbot).
Once you have a role in the regulatory field, converting it to fun and profit is a straightforward exercise in politics. How many people occupy this role is determined by how successful it is at limiting AIs.
Ah, ok. I was assuming that if a singularity occurred it’d be beyond our control, and that our fate would be determined by how the AI was originally programmed. But my reason for assuming this is based on very limited information, so I don’t really know. If it turns out that people with political power control AI, then I think you are very right.
But if you’re right and we live in a society where ASI-level power is controlled by people with political power… that really, really scares me. My intuition is that it’d be just a matter of time before someone screws up. I’m not sure what to think of this...