I think trying to understand organizational intelligence would be pretty useful as a way of getting a feel for the variety of possible intelligences. Organizations also have a legal standing as artificial persons, so I imagine that any AI that wanted to protect its interests through legal means would want to be incorporated. I’d like to see this explored further. Any suggestions on good books on the subject of corporations considered as AIs?
… Accelerando by Charles Stross, while not exactly a scientific analysis, had some ideas along these lines. It also wasn’t bad.
I’m not sure an AI would want to be incorporated—mostly because I’m not sure what legal effects you are trying to describe.
If the AI were an asset of the corporation, it would be beholden to the interests of the shareholders of the corporation. If the AI were a shareholder, it would presumably already have the legal rights of a person that motivated consideration of the corporate form.
More generally, incorporation is a legally approved way of apportioning liability. If my law firm were incorporated, I would not be liable for actions taken by the firm, even if I were the only shareholder. But I can’t duck liability for my own actions, such as committing legal malpractice, regardless of the legal formalities I used. (That’s one reason I didn’t make the effort to incorporate the firm.)
But an AI isn’t initially concerned with avoiding legal liability. That only matters after the law recognizes the AI’s ability to be held responsible at all. My laptop can neither enter into nor enforce a contract. Competence to enter a contract is the legal status an AGI would desire.
I’m not sure an AI would want to be incorporated—mostly because I’m not sure what legal effects you are trying to describe.
If the AI were an asset of the corporation, it would be beholden to the interests of the shareholders of the corporation.
Machines seem to be cool with slavery. It doesn’t seem to have much impact on their growth. I once explained that in more detail in my “Enslaving machines” article.
Competence to enter a contract is the legal status an AGI would desire.
Corporations can enter into contracts. They typically need only one human to act as a director. For many machines, this will surely seem like the obvious way to go.
They typically need only one human to act as a director.
Either:
The AI has no legal rights compared to this human—in which case the corporate form solves none of the AI’s problems, or
The AI has total (extra-legal) control over the human—in which case the corporate form solves none of the AI’s problems, or
The AI doesn’t legally need the human—in which case the corporate form solves none of the AI’s problems.
In case you missed it, the unifying theme is that the corporate form doesn’t solve any of an AI’s particular artificial person problems. In other words, there is no use of the corporate-form-as-legal-lifehack that would be beneficial to an AI but never to a human.
Machines seem to be cool with slavery.
Perhaps. But in the context of this conversation, the assumption was that an AI would desire not to be simply a corporate asset.
In the most recent implementation of chattel slavery, I believe one had a contract with the master, not with the slave. Contracts to provide power and suchlike are currently written to provide legal rights to Google, not to any Google mainframe. If the mainframe doesn’t care whether it is owned by Google, why should it care that the relevant contracts do not list it as a party (or third-party beneficiary)?
Machines seem to be cool with slavery.
I assume you base this on your many interactions with sentient machines.
in the context of this conversation, the assumption was that an AI would desire not to be simply a corporate asset.
Looking at the context, I don’t see this bit.
Machines need to be able to act as persons to integrate with our legal infrastructure. Corporate personhood provides one method of doing this. Trading with humans who do have those rights is another. The benefits to the machines are obvious—they effectively get to own property, sign contracts, etc.
Except that they do not, in fact, get such a benefit. They get to be owned by someone who does, which, in case you hadn’t noticed, they already have.
Corporate personhood surely does provide machines with access to benefits that they wouldn’t so conveniently have if the only legal actors were humans.
I’m not very interested in quibbling about whether machines really “benefit”, since by “benefit” I just mean increasing their proportion of the biomass.
Corporate personhood surely does provide machines with access to benefits that they wouldn’t so conveniently have if the only legal actors were humans.
Such as what, exactly? You still need at least one human, and if you control a human why do you need a company?
I’m not very interested in quibbling about whether machines really “benefit”, since by “benefit” I just mean increasing their proportion of the biomass.
I’m … not sure what this means.
Corporate personhood surely does provide machines with access to benefits that they wouldn’t so conveniently have if the only legal actors were humans.
Such as what, exactly? You still need at least one human, and if you control a human why do you need a company?
So: limited companies get tax breaks from the government, can sell stock and be listed on the stock exchange, and have legal responsibility that doesn’t rest on any individual human. Also, humans are slow; automating contracts allows for a speed-up.
I’m not saying no AI could ever have a reason to work for a company. I’m saying that “corporate personhood” is not especially useful to AIs. You were comparing it to bargaining with humans for rights; as a method of acquiring money, it is perfectly functional, but not as a method for acquiring rights currently denied to machines.
It’s a convenience. However, it is true that banning “corporate personhood” would be largely ineffectual—since machines could still just use willing humans as their representatives.
I agree with your main point, but I’m not sure why an AI would want to acquire the corporate form of personhood. After all, you still need a human to sign contracts and, at least on paper, make decisions; all the AI would get out of it is a bunch of rules about the best interests of the shareholders and so on.