I don’t believe that either of the two linked pieces are justifications for building potentially omnicidal AGI.
The former explicitly avoids talking about the risks and states no plan for navigating them. As I’ve said before, I believe the generator of that essay is attempting to build a narrative in society that leads people to support the author’s company, not attempting to engage seriously with critics of his building potentially omnicidal machines, nor attempting to explain anything about how to navigate that risk.
The latter meets the low standard of mentioning the word ‘existential’, but mostly seems to hope that we can choose to have a smooth takeoff, rather than admitting that (a) there is no known theory of how novel capabilities will arrive with new architectures & data & compute, and (b) the company is essentially running as fast as it can. I mostly feel like it acknowledges reasons for concern and then says that it believes in itself, not entirely dissimilar to how a politician makes sure to state the wishes of their various constituents before going on to do whatever they want.
There are no commitments. There are no promises. There is no argument that this can work. There is only an articulation of what they’re going to do, the risks, and a belief that they are good enough to pull through.
Such responses are unserious.
I think the justification of these articles goes something like this: “here is a vision of the future where we successfully align AI. It is utopian enough that it warrants pursuing.” Risks from creating AI just aren’t the topic; they deal with them elsewhere. These essays were specifically focused on the “positive vision” part of alignment. So I think you are critiquing the articles for lacking something they were never intended to have in the first place.
OpenAI seems to have made some basic commitments here; the word ‘commit’ is mentioned 29 times. Other companies have done so as well, here and here. Here Anthropic makes promises for optimistic, intermediate, and pessimistic scenarios of AI development.
I asked for the place where the cofounders of that particular multi-billion dollar company lay out why they have chosen to accept riches and glory in exchange for building potentially omnicidal machines, and engage with serious critics and criticisms.
In your response, you said that this was implicitly such a justification.
Yet it engages not once with arguments for the default outcome being omnicide or omnicide-adjacent, nor with the personal incentives on the author.
So this article does not at all engage with the obvious and major criticisms I have mentioned, nor the primary criticisms of the serious critics like Hinton, Bengio, Russell, and Yudkowsky, none of whom are in doubt about the potential upside. Insofar as this is an implicit attempt to deal with the serious critics and obvious arguments, it totally fails.
The founders of the companies accept money for the same reason any other business accepts money. You can build something genuinely good for humanity while making yourself richer at the same time. This has happened many times already (Apple’s phones, for example).
I concede that the founders of the companies didn’t personally and publicly engage with the arguments of people like Yudkowsky, but that’s a really high bar. Historically, CEOs aren’t usually known for getting into technical debates. For that reason, they create safety councils that monitor the perceived threats.
And it’s not like there was no response whatsoever from people who are optimistic about AI; there were plenty of them. I believe that arguments matter, not the people who make them. If a successful argument has been made, then I don’t see a need for CEOs to repeat it. And I certainly don’t think that just because CEOs don’t go to debates, that makes them unethical.
There’s a common confusion between following local norms and doing what is right. A lot of people in the world lie when it is convenient, so someone lying when convenient doesn’t mean they’re breaking from the norms; but that doesn’t mean the standard behavior is right, nor that they’re behaving well.
I understand that it is not expected of CEOs to defend their business existing. But in this situation they believe their creations have the potential to literally kill everyone on earth. This has ~never happened before. So now we have to ask ourselves: “What is the right thing to do in this new situation?” I would say that the right thing to do is show up to talk with the people who do not want to die, and engage with what they have to say. Not to politely listen to them and say “I hear your concerns, now I will go back to my life and continue doing whatever I see fit,” but to engage with the people and argue with them. And I think the natural person to do so with is the Nobel Prize winner in your field who thinks what you are doing is wrong.
I understand it’s not expected of people to have arguments. But this is not a good thing about civilization, that people with incredible power over the world are free to hide away and never show up to face those who they have unaccountable power over. In a better world, they would show up to talk and defend their use of power over the rest of us, rather than hiding away and getting rich.
I think then we just fundamentally disagree about the ethical role of a CEO in a company. I believe that it is to find and gather people who are engaged with the critics’ arguments (like that guy from this forum who was hired by Anthropic). If you have people on your side who are able to engage with the arguments, then this is good enough for me. I don’t see the role of the CEO as publicly engaging with critics’ arguments, even in the moral sense. In the moral sense, my requirements would actually be even lower. IMO, it would be enough just to have people broadly on your side (optimists, for example) engage with the critics.
I believe the disagreement is not about CEOs, it’s about illegitimate power. If you’ll allow me a brief detour, I’ll try to explain.
Sometimes people grant other people power over them. For instance, I have agreed to work at my company. I’ve agreed that my CEO can fire me, and make many other demands of me, in exchange for money and other various demands I can make of him. Ideally we entered into this agreement freely and without inappropriate pressure.
Other times, people get power over people without any agreement or granting. Your parent typically has a lot of power over you until you are 18. They can determine what you eat, where you are physically located, what privacy you have, what resources you have, etc. Also, as has been very important for most of history, people have been able to be physically violent to one another and hurt people or even end their lives. Neither of these powers is arrived at consensually.
For the latter, an important question to ask is “How does one wield this power well? What does it mean to wield it well vs poorly?” There are many ways to parent, many choices about diet and schooling and sleep times and what counts as fair punishment. But some parents starve their children and beat them for not following instructions and sexually assault them. This is an inappropriate use of power.
There’s a legitimacy that comes with being granted power, and an illegitimacy that comes with getting or wielding power that you were not granted.
I think that there’s a big question about how to wield it well vs poorly, and how to respect people you have illegitimate power over. Something I believe is that society functions better if we take seriously the attempt to wield it well. To not casually kill someone if you can get away with it and feel like it, but to consider them as people worthy of respect, and ask how you can respect the people you’ve been non-consensually given power over.
This requires doing some work. It involves asking yourself what’s a reasonable amount of effort to spend modeling someone’s preferences given how much power you have over them, it involves asking yourself if society has any good received wisdom on what to do with this particular power, and it involves engaging with people who are aggrieved by your use of power over them.
Now, the standard model for companies and businesses is a libertarian-esque free market, where all trades are consensual and have no inappropriate pressure. This is like the first situation I described, where a company has no people it has undue power over, no people who it can treat better or worse with the power it has over them.
The situation where you are building machines you believe may kill literally everyone is like the second situation, where you have a very different power dynamic, where you’re making choices that affect everyone’s lives and that they had little-to-no say in. In such a situation, I think if you are going to do what is good and right, you owe it to them to show up and engage with those who believe you are using the power you have over them in ways that are seriously hurting them.
That’s the difference between this CEO situation and all of the others. It’s not about standards for CEOs, it’s about standards for illegitimate power.
This kind of talking-with-the-aggrieved-people-you-have-immense-power-over is a way of showing the people basic respect, and it is not present in this case. I believe these people are risking my life and many others’, and they seem to me disrespectful and largely uninterested in showing up to talk with the people whose lives they are risking.
Then we are actually broadly in agreement. I just think that instead of CEOs responding to the public, having anyone on their side (the side that believes AI alignment is possible) respond is enough. Just as an example that I came up with: if a critic says that some detail is a reason why AI will be dangerous, I do agree that someone needs to respond to the argument. But I would be fine with it being someone other than the CEO.
That’s why I am relatively optimistic about Anthropic hiring the guy who has engaged with critics’ arguments for years.