I’m glad you are interested, and I’d love to hear your thoughts on the paper if you read it. I’d love to talk with you too; just send me an email when you’d like and we can skype or something.
What do you mean by “the more technical version of the problem” exactly?
My take right now is that algorithmic similarity (and instantiation), at least the versions of it relevant for consciousness, decision theory, and epistemology, will have to be either a brute empirical fact about the world or a subjective fact about the mind of the agent reasoning about it (like priors and utility functions). What it will not be is some reasonably non-arbitrary property/relation with interesting and useful features (like Nash equilibria, centers of mass, and temperature).