Thanks a lot for the link. I’ll put it in the reading list (if you don’t mind).
I would be interested to hear what you think about the more technical version of the problem. Do you also think that it can have no good solution, or do you think that a solution just won’t have the nice philosophical consequences?
Also, I’m excited to know a smart waterfall apologist, and if you’re up for it, I would really like to talk more with you about the argument in your thesis once I’ve thought about it a bit more.
I’m glad you are interested, and I’d love to hear your thoughts on the paper if you read it. I’d love to talk with you too; just send me an email whenever you’d like and we can Skype or something.
What do you mean by “the more technical version of the problem” exactly?
My take right now is that algorithmic similarity (and instantiation), at least in the versions relevant for consciousness, decision theory, and epistemology, will have to be either a brute empirical fact about the world or a subjective fact about the mind of the agent reasoning about it (like priors and utility functions). What it will not be is some reasonably non-arbitrary property/relation with interesting and useful properties (like Nash equilibria, centers of mass, and temperature).