Another Author’s Note to “Funk-tunul’s Legacy”; Or, A Criminal Confession
Okay, sorry, the sentence beginning with “It’s not obviously possible …” is bullshit handwaving on my part because the modeling assumptions I chose aren’t giving me the result I need to make the story come out the way I want. (Unless I made yet another algebra mistake.) But it’s almost half past one in the morning, and I’m mostly pretty happy with this post—you see the thing I’m getting at—so I’m pretty eager to shove it out the door and only make it more rigorous later if someone actually cares, because I have a lot of other ideas to write up!
Based on the quote from Jessica Taylor, it seems like the FDT agents are trying to maximize their long-term share of the population, rather than their absolute payoffs in a single generation? If I understand the model correctly, that means the FDT agents should try to maximize the ratio of FDT payoff to 9-bot payoff (so as to maximize the FDT-to-9-bot population ratio in the next generation). The algebra then shows that they should refuse to submit to 9-bots once the population share of FDT agents gets high enough (Wolfram|Alpha link), without needing to drop the random-encounters assumption.
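To make the share-maximization point concrete, here's a toy replicator-dynamics sketch. The payoff numbers are my own assumptions, not from the post (FDT meets FDT: 5/5 even split; FDT submits to a 9-bot: 1/9; FDT refuses: 0/0; two 9-bots make incompatible demands: 0/0), and I add an assumed baseline fitness b > 0 so the payoff ratios stay finite:

```python
# Toy sketch of the "maximize next-generation share" reasoning above.
# All payoff numbers below are illustrative assumptions, NOT from the post:
#   FDT meets FDT:         5 / 5  (even split)
#   FDT submits to 9-bot:  1 / 9
#   FDT refuses a 9-bot:   0 / 0
#   9-bot meets 9-bot:     0 / 0  (incompatible demands)
# plus an assumed baseline fitness b > 0 for everyone.

def fitnesses(p, submit, b=1.0):
    """Expected payoffs (FDT agent, 9-bot) given FDT population share p,
    under uniformly random encounters."""
    fdt = b + 5 * p + (1 if submit else 0) * (1 - p)
    nine_bot = b + (9 * p if submit else 0)
    return fdt, nine_bot

def next_share(p, submit, b=1.0):
    """FDT share next generation under standard replicator dynamics:
    shares are reweighted in proportion to fitness."""
    f, n = fitnesses(p, submit, b)
    return p * f / (p * f + (1 - p) * n)

# Refusing beats submitting exactly when it yields the larger next-gen share.
for p in (0.02, 0.05, 0.1, 0.5, 0.9):
    print(p, next_share(p, submit=False) > next_share(p, submit=True))
```

Under these particular assumed numbers there's indeed a threshold share above which refusal wins (here it happens to be quite low, a bit under p = 0.08 with b = 1), matching the qualitative shape of the claim; the exact threshold obviously depends on the payoff values and baseline, which the real algebra in the Wolfram|Alpha link would pin down.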
It still seems like CDT agents would behave the same way given the same goals, though?