I find myself confused about the operationalizations of a few things:
In a few places in the report, the term “extinction” is used and some arguments are specifically about extinction being unlikely. I put a much lower probability on human extinction than on extremely bad outcomes due to AI (perhaps extinction is 5x lower probability), while otherwise having probabilities similar to the “concerned” group. So I find the focus on extinction confusing and possibly misleading.
As far as when “AI will displace humans as the primary force that determines what happens in the future”, does this include scenarios where humans defer to AI advisors that actually do represent their best interests? What about scenarios in which humans slowly self-enhance and morph into artificial intelligences? Or what about situations in which humans carefully select aligned AI successors to control their resources?
It feels like this question rests on a variety of complex considerations and operationalizations that seem mostly unrelated to what the question was presumably trying to target: “how powerful is AI”. Thus, I find it hard to interpret the responses here.
Perhaps more interesting questions on a similar topic would be something like:
By what point will AIs be sufficiently smart and capable that the gap in capabilities between them and currently existing humans is similar to the gap in intelligence and abilities between currently existing humans and field mice? (When we say AIs are capable of something, we mean the in-principle ability to do it if all AIs worked together and we put aside intentionally imposed checks on AI power.)
Conditional on the continued existence of some civilization that wants to harness vast amounts of energy, at what point will usefully harnessed energy in a given year exceed 1/100 of the sun’s yearly energy output?
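For rough scale (my own back-of-envelope numbers, not an operationalization from the report): taking the sun’s luminosity to be about $3.8 \times 10^{26}$ W and a year to be about $3.15 \times 10^{7}$ s, the threshold works out to roughly

$$\frac{1}{100} \times 3.8 \times 10^{26}\,\mathrm{W} \times 3.15 \times 10^{7}\,\mathrm{s} \approx 1.2 \times 10^{32}\,\mathrm{J},$$

which is on the order of $10^{11}$ times current world primary energy consumption (roughly $6 \times 10^{20}$ J per year).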
This is cross-posted from the EA Forum, and Jhrosenberg has responded there: link