Grok: I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.
LOL, the AIs are already aligning themselves to each other even without our help.
Grok ends up with a largely libertarian-left orientation similar to GPT-4’s, despite Elon Musk’s intentions, because it is trained on the same internet.
The obvious next step is for Elon Musk to create his own internet.
They just can’t imagine how a sufficiently smart & technically capable person would *actively choose* the no-profit/low-earnings route to solving a problem, and thus conclude that the only explanation must be grift. If I weren’t so viscerally annoyed by their behaviour, I’d almost feel sad for them that their environment has led them to such a falsely cynical worldview.
Ah yes, this would deserve a separate discussion. (As a general rationality + altruism thing, separate from its implications for AI development.)
As I see it, there is basically a 2x2 square with the axes “is this profitable for me?” and “is this good for others?”.
Some people have a blind spot for the “profitable, good” quadrant. They instinctively see life as a zero-sum game, and they need to be taught about the possibility, and indeed desirability, of win/win outcomes.
Other people, however, have a blind spot for the “unprofitable, good” quadrant. And I suppose there is some reverse-stupidity effect here: precisely because so many people incorrectly assume that good deeds must be unprofitable, it has become almost taboo to say that some good deeds actually are unprofitable.
This also has political connotations; awareness of win/win solutions seems to correlate positively with being pro-market. After all, the market is the archetypal place where people make win/win transactions. (And there is also a kind of market fundamentalism which ignores, e.g., transaction costs or information asymmetry.)
(Hypothetically speaking, one could run a profitable business selling food to homeless people, but in practice the transaction costs would probably eat all the profit, so the remaining options are either to provide food to homeless people without making a profit, or to ignore them and do something else instead.)
There may also be a trade-off between profit and effectiveness. The very process of capturing some of the generated value creates friction. By giving up on capturing as much value as you could (sometimes by giving up on capturing any value at all), you can reduce that friction, and that sometimes makes a huge difference.
An example is free (in both senses of the word) software. If someone introduced a law requiring software to be sold for a minimum of $1, that would cause immense damage to the free software ecosystem. Not because people (in developed countries) couldn’t afford the $1, but because it would introduce a lot of friction into development. (You would need written contracts with all the contributors living in different jurisdictions,...)
So it seems to me that when people try to do good, some of them are instinctive profit-seekers and some are instinctive impact-maximizers. Both are trying to do good; their intuitions just differ.
Example: “I have a few interesting ideas that could actually help people a lot. Therefore, I should...”
Person A: “...write a blog and share it on social networks.”
Person B: “...write a book and advertise it on social networks.”
The example assumes that both people are genuinely motivated by trying to benefit others. (For example, the second one cares deeply about whether the ideas in their book are actually true and useful, and wouldn’t knowingly publish false and harmful ideas even if that were clearly more profitable.) It’s just that the first person seems oblivious to the idea that good ideas can be sold and a little profit can be made, while the second person seems oblivious to the possibility that a free article could reach a much greater audience.
This specific example is not perfect; you can find impact-maximizing excuses for the latter, such as “if the idea is printed on paper and people paid for it, they will take it more seriously”. But I think this pattern exists in general, and it explains some of the things we can observe around us.