My answer is a rather standard compatibilist one: the algorithm in your brain produces the sensation of free will as an artifact of an optimization process.
There is nothing you can do about it (you are executing an algorithm, after all), but your subjective perception of free will may change as you interact with other algorithms, like me or Jessica or whoever. There aren’t really any objective, intentional “decisions”, only our perception of them; decision theories are therefore just byproducts of all these algorithms executing. It doesn’t matter, though, because you have no choice but to feel that decision theories are important.
So, watch the world unfold before your eyes, and enjoy the illusion of making decisions.
I wrote about this over the last few years:
https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty
https://www.lesswrong.com/posts/TQvSZ4n4BuntC22Af/decisions-are-not-about-changing-the-world-they-are-about
https://www.lesswrong.com/posts/436REfuffDacQRbzq/logical-counterfactuals-are-low-res
Thanks, I’ll revisit these. They seem like they might be pointing towards a useful resolution I can use to better model values.
Feel free to let me know either way, even if you find that the posts seem totally wrong or missing the point.
Okay, so now that I’ve had more time to think about it, I really like the idea of treating “decisions” as the subjective expression of what it feels like to learn which universe you are in. This holds for the third-person perspective on the “decisions” of others, too: they still go through the whole process that feels from the inside like choosing or deciding, but from the outside there is no need to appeal to any of that to talk about “decisions”. Instead, to an outside observer, “decisions” are just resolutions of uncertainty about what will happen to a part of the universe modeled as another agent.
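If it helps to make that outside view concrete, here is a minimal Python sketch (the action names and probabilities are placeholders I made up for illustration): to the observer, the agent’s “decision” is nothing more than a prior collapsing into a posterior once the action is seen.

```python
# A minimal sketch of the outside view described above: an observer models
# another agent as a distribution over actions, and the agent's "decision"
# shows up, from the outside, only as that uncertainty resolving.

import random

# The observer's prior over what the modeled agent will do.
prior = {"cooperate": 0.6, "defect": 0.4}

def agent_act() -> str:
    # The agent runs its algorithm; the observer only ever sees the output.
    return random.choices(list(prior), weights=list(prior.values()))[0]

print("prior over the agent:", prior)

# Observation: the "decision" enters the outside description purely as an
# update on the observer's uncertainty, with no extra decision-making
# primitive needed.
observed = agent_act()
posterior = {a: float(a == observed) for a in prior}
print("observed action:", observed)
print("posterior:", posterior)
```

Nothing in the observer’s description mentions choosing; “deciding” appears only in the agent’s own first-person account.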
This seems quite elegant for my purposes, as I don’t run into the problems associated with formalizing UDT (at least, not yet), and it lets me modify my model for understanding human values to push “decisions” outside of it, or into the after-the-fact part.
Thank you for taking the time to think about this approach; I’m happy it makes sense. I like your summary. Feel free to message me if you want to discuss this some more.