Yes, but are you actually able to model out all the steps and assess whether it's wise? Say you want to keep a loved one alive. Overthrowing the government in a coup is theoretically possible, and then as dictator you spend all tax dollars on medical research.
But the probability of success is so low, and many of those futures end with you and that loved one killed in reprisals. Can you assess the likelihood of a path that plays out over years, with thousands of steps?
Or do you just take greedy, near-term actions?
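To put rough numbers on that intuition: when a plan only succeeds if thousands of steps all go right, the joint probability collapses even if each individual step is near-certain. A minimal sketch in Python (the per-step probabilities are made-up for illustration, and the steps are treated as independent, which real plans aren't):

```python
# Joint success probability of a plan whose steps must all succeed.
# Independence is a simplifying assumption; the compounding is the point.

def plan_success_prob(per_step_prob: float, n_steps: int) -> float:
    """P(every one of n_steps succeeds) at per_step_prob each."""
    return per_step_prob ** n_steps

for p in (0.999, 0.99, 0.9):
    print(f"p={p}: 1000-step plan succeeds with prob {plan_success_prob(p, 1000):.2e}")
# p=0.999 -> ~3.68e-01;  p=0.99 -> ~4.32e-05;  p=0.9 -> ~1.75e-46
```

Even 99% reliability per step leaves a thousand-step coup plan at odds of roughly 1 in 23,000.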
Of course, power-seeking behavior is kind of incremental. You don't have to plan 80 years into the future. If you get power and money now, you can buy candy, bandaids, etc. Get richer and you can hire bodyguards. And so on: you get immediate near-term benefits with each increase in power.
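That incremental logic is just greedy hill-climbing: at every step, take whichever affordable action yields the largest immediate gain, with no long-horizon plan at all. A toy sketch (action names, costs, and payoffs are invented):

```python
# Toy greedy power-seeking loop: at each step, pick the affordable action
# with the best immediate payoff. Every step is locally beneficial on its
# own, so no multi-decade plan is required.

ACTIONS = [
    # (name, resources required, immediate gain) -- illustrative numbers
    ("buy supplies",     0,    1),
    ("start a business", 10,   5),
    ("hire bodyguards",  50,  20),
    ("lobby for favors", 200, 100),
]

def greedy_step(resources: float) -> tuple[str, float]:
    affordable = [a for a in ACTIONS if a[1] <= resources]
    name, _, gain = max(affordable, key=lambda a: a[2])
    return name, gain

resources = 5.0
for _ in range(10):
    action, gain = greedy_step(resources)
    resources += gain
    print(f"{action:16s} -> resources = {resources:.0f}")
```

The agent climbs from supplies to bodyguards without ever evaluating a future more than one step out.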
A greedy strategy would actually work. The issue arises when there's a choice to break the rules. Do you evade taxes or steal money? These all carry risks. Once you're a billionaire with large resources, do you start illegally making weapons? There are all these branch points where, if the risk of getting caught is high, the AI won't do those things.
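Each of those branch points reduces to an expected-value comparison: the payoff from breaking the rule, discounted by the probability of getting caught times the penalty, versus the safe compliant payoff. A hedged sketch with invented numbers:

```python
# Branch point: break a rule or comply? All payoffs and probabilities
# here are illustrative assumptions, not estimates.

def break_rule_ev(gain: float, p_caught: float, penalty: float) -> float:
    """EV of cheating: keep the gain if undetected, pay the penalty if caught."""
    return (1 - p_caught) * gain - p_caught * penalty

COMPLY_VALUE = 10.0  # safe, legal payoff at this branch
for p_caught in (0.01, 0.2, 0.8):
    ev = break_rule_ev(gain=100.0, p_caught=p_caught, penalty=1_000.0)
    choice = "break rule" if ev > COMPLY_VALUE else "comply"
    print(f"p_caught={p_caught:4.2f}: EV={ev:7.1f} -> {choice}")
# Once detection is likely, compliance dominates -- exactly the gating
# effect described above.
```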