Pebblesorters use words whose meaning feels mysterious to them, but the computation that encodes their preferences doesn’t need to use any such words, it just counts pebbles in heaps. This shoots down your argument.
Tangentially: the first time I read this comment I parsed the first word as “Philosophers.” Which rendered the comment puzzling, but not necessarily wrong.
It is not entirely clear to me that Pebblesorters are good stand-ins for humans in this sort of analogy.
But, leaving that aside… applying Wei Dai’s argument to Pebblesorters involves asking whether Idealized Pebblesorters use words like “right” and “should” and “good” and “correct” and “proper” with respect to prime-numbered piles, the way Base Pebblesorters do.
I’m not sure what the answer to that question is. It seems to me that they just confuse themselves by doing so, but I feel that way about humans too.
You’re certainly right that the computation that encodes their preferences doesn’t involve words, but I don’t know what that has to do with anything. The computation that encodes our preferences doesn’t involve words either… and so?
The further along this track I go, the less meaningful the question seems. I guess I’m Just Not Getting It.