> End-state-focused Value-aligned Highly-Effective-Optimizer AI Agents (hereafter Powerful AI Agents) with the primary value of win-at-all-costs are extremely dangerous.
I think the disagreement point is this:
> Some say that intelligent agents being dangerously powerful is good actually, and won’t be a risky situation
I have several cruxes/reasons why I disagree:
1. I think I have quite a lot less probability on unrestrained AI conflict than @tailcalled does.
2. I disagree with the assumption behind this:
> But nobody has come up with an acceptable end goal for the world, because any goal we can come up with tends to want to consume everything, which destroys humanity.
Because I think a lot of the reason the search came up empty is that people were attempting to solve the problem too far ahead of the actual risk.
Also, I think it matters a lot which specific utility function we are talking about here, analogously to how picking a specific number or function for a variable N tells you a lot more about what will happen than reasoning about the variable in the abstract (see the toy sketch at the end of this comment).
3. I think the world isn’t as offense-dominant as LWers/EAs tend to think, and that while some level of offense outpacing defense is quite plausible, I don’t think it’s as extreme as “defense is essentially pointless.”
I just want to say that I am someone who is afraid the world is currently in a very offense-dominant strategic position, but I don’t think defense is pointless at all. I think it’s quite tractable and should be heavily invested in! Let’s get some d/acc going, people!
In fact, a lot of my hope for good outcomes for the future routes through Good Actors (probably also making a good profit) using powerful tool-AI to do defensive acceleration of R&D in a wide range of fields. Automated Science-for-Good, including automated alignment research. Getting there without the AI causing catastrophe in the meantime is a challenge, but not an intractable one.
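To make the "specific utility function" point concrete, here is a toy sketch of my own (the state space, numbers, and function names are all invented for illustration, not drawn from anyone's post): two functions that are both "a utility function" in the abstract, but only one of whose optima consumes everything.

```python
# Toy illustration (all numbers/names invented): the same brute-force
# optimizer over the same state space lands in very different places
# depending on which concrete utility function you plug in.

from itertools import product

# States are pairs (resources_consumed, humans_left_alone), each 0-10.
states = list(product(range(11), repeat=2))

def unbounded_utility(state):
    consumed, spared = state
    # More consumption is always strictly better.
    return consumed

def bounded_utility(state):
    consumed, spared = state
    # Consumption saturates at 5; leaving humans alone also scores points.
    return min(consumed, 5) + spared

print(max(states, key=unbounded_utility))  # -> (10, 0): consumes everything
print(max(states, key=bounded_utility))    # -> (5, 10): stops at "enough"
```

Abstractly both agents "maximize utility," but only the first one's optimum eats the whole state space; that is the sense in which picking the concrete function tells you far more than reasoning about "a utility function" as a free variable.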