Now here’s Bob. He’s been created-by-Joe, and given this wonderful machine, and this choice. And let’s be clear: he’s going to choose joy. I pre-ordained it. So is he a slave? No. Bob is as free as any of us. The fact that the causal history of his existence, and his values, includes not just “Nature,” but also the intentional choices of other agents to create an agent-like-him, makes no difference to his freedom. It’s all Nature, after all.
Here’s an alternative perspective that looks like a plausible contender to me.
If Bob identifies with his algorithm rather than with physics (cf. this exchange on decision theory), and he’s faced with the choice between paperclips and joy, then you could distinguish between cases where:
1. Bob was selected to be in charge of that choice by a process that would only pick an algorithm if it was going to choose joy.
2. Bob was selected to be in charge of that choice by a process that’s indifferent to the output of the selected algorithm.
(In order to make sure that the creator always has the option of picking an algorithm that chooses joy, let’s extend your thought experiment so that the creator has millions of options — not just Alice and Bob.)
In the former case, I think you could say that Bob can’t change whether joy or paperclips gets chosen. (Because if Bob were to choose paperclips, then he would never have received the choice in the first place.) Notably, though, Bob can affect whether he gets physically instantiated and put in charge of the decision between joy and paperclips. (By choosing joy, and thereby making himself available as a candidate.)
On this perspective, the relevant difference wouldn’t be “created by nature” vs. “created by agents”. Nature could (in principle) create someone via a process that exerts extremely strong selection pressure on that agent’s choice in a particular dilemma, thereby eliminating that agent’s freedom to choose its own output there. And conversely, an agent could choose whom to create based on qualities other than what they’d choose in a particular dilemma — leaving the created agent free to decide that dilemma on their own.
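To make the filtered-vs.-indifferent distinction concrete, here’s a minimal toy simulation. This is my own sketch, not anything from the original thought experiment: the candidate names, the selector functions, and the idea of modeling an “algorithm” as a zero-argument function are all invented for illustration.

```python
import random

# Hypothetical toy model: each candidate "algorithm" is just a
# zero-argument function returning its choice in the dilemma.
CANDIDATES = {
    "Bob": lambda: "joy",
    "Alice": lambda: "paperclips",
    # Stand-ins for the "millions of options" in the extended setup:
    **{f"agent_{i}": (lambda c=c: c)
       for i, c in enumerate(random.choices(["joy", "paperclips"], k=20))},
}

def filtering_selector(candidates):
    """Case 1: only instantiate an algorithm that would choose joy."""
    joy_choosers = [name for name, alg in candidates.items() if alg() == "joy"]
    return random.choice(joy_choosers)

def indifferent_selector(candidates):
    """Case 2: instantiate an algorithm regardless of what it would choose."""
    return random.choice(list(candidates))

# Under the filtering selector, the outcome is fixed: whoever ends up in
# charge chooses joy. An algorithm's choice doesn't change *which* outcome
# occurs, but it does determine *whether that algorithm* gets instantiated.
picked = filtering_selector(CANDIDATES)
assert CANDIDATES[picked]() == "joy"

# Under the indifferent selector, the instantiated agent's own algorithm
# settles the outcome.
picked = indifferent_selector(CANDIDATES)
print(picked, "chooses", CANDIDATES[picked]())
```

The asymmetry is the point of the earlier paragraph: in the filtering case, the dependence runs from the algorithm’s choice to its instantiation, not from its choice to the outcome.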