Instances in history in which private companies (or any individual humans) have intentionally turned down huge profits and power are the exception, not the rule.
OpenAI wasn’t a private company (i.e. for-profit) at the time of the OP grant though.
Aren’t these different things? Private yes, for profit no. It was private because it’s not like it was run by the US government.
As a non-profit it is obligated to not take opportunities to profit, unless those opportunities are part of it satisfying its altruistic mission.
I don’t think this is true. Nonprofits can aim to amass large amounts of wealth, they just aren’t allowed to distribute that wealth to its shareholders. A good chunk of obviously very wealthy and powerful companies are nonprofits.
I’m not sure if those are precisely the terms of the charter, but that’s beside the point. It is still “private” in the sense that there is a small group of private citizens who own the thing and decide what it should do with no political accountability to anyone else. As for the “non-profit” part, we’ve seen what happens to that as soon as it’s in the way.
So the argument is that Open Phil should only give large sums of money to (democratic) governments? That seems too overpowered for the OpenAI case.
I was more focused on the ‘company’ part. To my knowledge there is no such thing as a non-profit company?
This does not feel super cruxy, as the power incentive still remains.
In that case OP’s argument would be saying that donors shouldn’t give large sums of money to any sort of group of people, which is a much bolder claim
(I’m the OP)
I’m not trying to say “it’s bad to give large sums of money to any group because humans have a tendency to seek power.”
I’m saying “you should be exceptionally cautious about giving large sums of money to a group of humans with the stated goal of constructing an AGI.”
You need to weigh any reassurances they give you against two observations:
1. The commonly observed pattern of individual humans or organisations seeking power (and/or wealth) at the expense of the wider community.
2. The strong likelihood that there will be an opportunity for organisations pushing ahead with AI research to obtain incredible wealth or power.
So, it isn’t “humans seek power therefore giving any group of humans money is bad”. It’s “humans seek power” and, in the specific case of AI companies, there may be incredibly strong rewards for groups that behave in a self-interested way.
The general idea I’m working off is that you need to be skeptical of seemingly altruistic statements and commitments made by humans when there are exceptionally lucrative incentives to break these commitments at a later point in time (and limited ways to enforce the original commitment).
That seems like a valuable argument. It might be worth updating the wording under premise 2 to clarify this? To me it reads as saying that the configuration, rather than the aim, of OpenAI was the major red flag.