Many years ago I wrote my undergraduate thesis on the waterfall problem (though it went by another name to me). Basically, I painstakingly and laboriously transformed an arbitrary human into an arbitrary rock of sufficient size, via a series of imperceptibly tiny steps, none of which could be felt by the human. (I did this in imagination, not in reality, to be clear.) The point was to see if any of the steps seemed like a good place to draw a line and say “Here, consciousness is starting to go out; the system is starting to be less of a person.” As a result I became fairly convinced that there aren’t any good places to draw the line. So I guess I’m a waterfall apologist now!
Thanks a lot for the link. I’ll put it in the reading list (if you don’t mind).
I would be interested to hear what you think about the more technical version of the problem. Do you also think that it can have no good solution, or do you think that a solution just won’t have the nice philosophical consequences?
Also, I’m excited to know a smart waterfall apologist, and if you’re up for it I would really like to talk more with you about the argument in your thesis once I’ve thought about it a bit more.
I’m glad you are interested, and I’d love to hear your thoughts on the paper if you read it. I’d love to talk with you too; just send me an email when you’d like and we can skype or something.
What do you mean by “the more technical version of the problem” exactly?
My take right now is that algorithmic similarity (and instantiation), at least the versions of it relevant for consciousness, decision theory, and epistemology, will have to be either a brute empirical fact about the world or a subjective fact about the mind of the agent reasoning about it (like priors and utility functions). What it will not be is some reasonably non-arbitrary property/relation with interesting and useful properties (like Nash equilibria, centers of mass, and temperature).
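To make the triviality worry behind the waterfall problem concrete, here is a minimal sketch (all names in it are hypothetical, and it assumes only that the physical system passes through pairwise-distinct states): given any run of a physical system and any computation trace of the same length, one can construct an interpretation map under which the system “implements” the computation, which is what makes a non-arbitrary notion of instantiation hard to pin down.

```python
# A toy illustration (hypothetical names, not from the discussion above) of the
# triviality worry: for any sequence of pairwise-distinct physical states and any
# computation trace of the same length, some interpretation map makes the
# physical system "implement" the computation.

def trivial_interpretation(physical_states, computation_trace):
    """Map each physical state to the computational state at the same position."""
    assert len(set(physical_states)) == len(physical_states), "physical states must be distinct"
    assert len(physical_states) == len(computation_trace)
    return dict(zip(physical_states, computation_trace))

# Three microstates of a rock vs. three steps of a simple counter.
rock_trajectory = ["rock_t0", "rock_t1", "rock_t2"]
counter_trace = [0, 1, 2]
interpretation = trivial_interpretation(rock_trajectory, counter_trace)

# Under this (entirely post-hoc) mapping, the rock's trajectory "computes" the counter.
assert [interpretation[s] for s in rock_trajectory] == counter_trace
```

The open question is whether anything less arbitrary than a post-hoc mapping like this can be required of “instantiation” without smuggling in observer-relative choices.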
Thanks, this is a good write-up!