I see. Yes, “philosophy” often refers to particular academic subcultures, with people who do philosophy for a living called “philosophers” (Plato had a better name for these people). I misread your comment at first and thought it was the “philosopher” arguing for the instrumentalist view, since that seems like their more stereotypical way of thinking and deconstructing things (whereas the more grounded physicist would just say “yes, you moron, electrons exist. Next question.”).
Shiroe
Do you have any examples of the “certain philosophers” that you mentioned? I’ve often heard of such people described that way, but I can’t think of anyone who’s insulted scientists for assuming e.g. causality is real.
On the contrary, it is my intention to illustrate that assertions of instances that have not been experienced (with respect to their assertion at t1) can be justified in the future in which they are observed (with respect to their observation at t2).
Sorry, I may not be following this right. I had thought the point of the skeptical argument was that you can’t justify a prediction about the future until it happens. Induction is about predicting things that haven’t happened yet. You don’t seem to be denying the skeptical argument here, if we still need to wait for the prediction to resolve before it can be justified.
I’ve also noticed that scaffolded LLM agents seem inherently safer. In particular, deceptive alignment would be hard for such an agent to achieve if, at every thought-step, it has to reformulate its complete mind state into English just in order to think at all.
You might be interested in some work done by the ARC Evals team, who prioritize this type of agent for capability testing.
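The safety property described above can be made concrete with a toy sketch. This is entirely hypothetical code (no real LLM is called; `toy_model` is a stand-in): the point is just that the agent's only memory between steps is a human-readable English transcript, leaving no hidden channel where a deceptive mind state could persist.

```python
def toy_model(prompt: str) -> str:
    # Stand-in for an LLM call: returns a canned "thought" based on
    # how many prior thoughts appear in the transcript.
    return f"Step {prompt.count('THOUGHT:') + 1}: considered the goal."

def run_agent(goal: str, steps: int = 3) -> str:
    # The agent's entire working state is this English-language transcript.
    transcript = f"GOAL: {goal}"
    for _ in range(steps):
        thought = toy_model(transcript)
        # Each thought must be serialized into the transcript before
        # the next step can use it; there is no other state.
        transcript += f"\nTHOUGHT: {thought}"
    return transcript

log = run_agent("fetch the weather report")
print(log.count("THOUGHT:"))  # → 3
```

Because every step's input is just the accumulated text, an overseer reading the transcript sees everything the agent itself can condition on.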
I’m sorry that comparing my position to yours led to some confusion: I don’t deny the reality of 3rd person facts. They probably are real, or at least it would be more surprising if they weren’t than if they were. (If not, then where would all of the apparent complexity of 1st person experience come from? It seems positing an external world is a good step in the right direction to answering this). My comparison was about which one we consider to be essential. If I had used only “pragmatist” and “agnostic” as descriptors, it would have been less confusing.
Again, I think the main difference between our positions is how we define standards of evidence. To me, it would be surprising if someone came to know 3rd person facts without using 1st person facts in the process. If the 1st person facts are false, this casts serious doubt on the 3rd person facts inferred from them. At this stage of the conversation, it seems like we could start proposing far more “effective” theories, like that nothing exists at all, which explains just as much of the evidence that remains once the 1st person facts are discarded.
You seem to believe we can get at the true third person reality directly, maybe imagining we are equivalent to it. You can imagine a robot (i.e. one of us) having its pseudo-experiences and pseudo-observations all strictly happening in the 3rd person, even engaging in scientific pursuits, without needing to introduce an idea like the 1st person. But as you said earlier, just because you can imagine something doesn’t mean that it’s possible. You need to start with the evidence available to you, not with what sounds reasonable to you. The idea of that robot is occurring in your 1st person perspective as a mental experience, which means it counts as evidence for the 1st person perspective at least as much as it counts as evidence for the 3rd. So does what it feels like to think eliminativism is possible, and so does what it feels like to chew 5 Gum®, and so on, and so on.
To me, all of this is a boring tautology. For you, it’s more like a boring absurdity, or rather it’s the truth turned upside down and pulled inside out. This is why I’m more interested in finding a double crux, something that would reveal the precise points where our thinking diverges and reconverges. There are already some parallels that we’ve both noticed, I think. I would say that you believe in the 1st person but with only one authentic observer: God, who is and who sees everything with perfect indifference, like in Spinoza’s idea. You could also reframe my notion of the 1st person to be a kind of splintered or shattered 3rd person reality, one which can never totally connect itself back together all at once. Our ways of explaining away the problems are essentially the same: we both stress that our folk theoretic concepts are untrustworthy, that we are deceiving ourselves, that we apply a theory which shapes our interpretations without us realizing it. We are also both missing quite a few teeth, from biting quite a few bullets.
There must be some precise moment where our thinking diverges. What is that point? It seems like something we need to use a double crux to find. Do you have any ideas?
If I had to choose between those two phrasings I would prefer the second one, since it’s more compatible with both of our notions. My notion of “emerges from” is probably too different from yours.
The main difference seems to be that you’re a realist about the third-person perspective, whereas I’m a nominalist about it, to use your earlier terms. Maybe “agnostic” or “pragmatist” would be good descriptors too. The third-person is a useful concept for navigating the first-person world (i.e. the one that we are actually experiencing). But that it seems useful is really all that we can say about it, due to the epistemological limitations we have as human observers.
I think this is why it would be a good double crux if we used the issue of epistemological priority: I would think very differently about Hard Problem related questions if I became convinced that the 3rd person had higher priority than the 1st person perspective. Do you think this works as a double crux? Is it symmetrical for you as well in the right way?
I meant subjective in the sense of “pertaining to a subject’s frame of reference”, not subjective in the sense of “arbitrary opinion”. I’m sorry if that was unclear.
But all of these observations are also happening from a third-person perspective, just like the rest of reality.
This is a hypothesis, based on information in your first-person perspective. To make arguments about a third-person reality, you will always have to start with first-person facts (and not the other way around). This is why the first person is epistemologically more fundamental.
It’s possible to doubt that there is a third-person perspective (e.g. to doubt that there’s anything like being God). But our first person perspective is primary, and cannot be escaped from. Optical illusions and stage tricks aren’t very relevant to this, except in showing that even our errors require a first-person perspective to occur.
EDIT: The third-person perspective being epistemologically more/less fundamental than the first-person perspective could work as a double crux with me. Does it work on your end as well?
You don’t believe that all human observations are necessarily made from a first-person viewpoint? Can you give a counter-example? All I can think of are claims that involve the paranormal or supernatural.
I don’t think I fall into either camp because I think the question is ambiguous. It could be talking about the natural structure of space and time (“mathematics”) or it could be talking about our notation and calculation methods (“mathematics”). The answer to the question is “it depends what you mean”.
The nominalist vs realist issue doesn’t appear very related to my understanding of the Hard Problem, which is more about the definition of what counts as valid evidence. Eliminativism says that subjective observations are problematic. But all observations are subjective (first person), so defining what counts as valid evidence is still unresolved.
I appreciate hearing your view; I don’t have any comments to make. I’m mostly interested in finding a double crux.
This isn’t really a double crux, but it could help me think of one:
If someone becomes convinced that there isn’t any afterlife, would this rationally affect their behavior? Can you think of a case where someone believed in Heaven and Hell, had acted rationally in accordance with that belief, then stopped believing in Heaven and Hell, but still acted just the same way as they did before? We’re assuming their utility function hasn’t changed, just their ontology.
Here are some cruxes, stated from what I take to be your perspective:
That there’s nothing at stake whether or not we have first person experiences of the kind that eliminativists deny; it makes no practical difference to our lives whether we’re so-called “automatons” or “zombies”, such terms being only theoretical distinctions. Specifically, it should make no difference to a rational ethical utilitarian whether or not eliminativism happens to be true. Resources should be allocated the same way in either case, because there’s nothing at stake.
Eliminativism is a more parsimonious theory than non-eliminativism, and is strictly better than it for scientific purposes; eliminativism already explains all of the facts about our world, and adding so-called “first person experiences” is just a cog which won’t connect to anything else; removing it wouldn’t require arbitrary double standards for the validity of evidence.
There’s no way of separating experience from functionality in a system. If an organism manifests consistent and enduring behaviors of self-preservation, goal-seeking, etc. then it must have experiences, regardless of how the organism itself happens to be constructed.
I’m looking for double cruxes now. The first two don’t seem very useful to me as double cruxes, but maybe the last one is. Any ideas?
because such sensations would be equivalent to predictions that I would be burning alive, which would be false and therefore interfere with my functioning
I don’t see a necessary equivalence here. You could be fully aware that the sensations were inaccurate, or hallucinated. But it would still hurt just as much.
if you could have a body which doesn’t experience, then it’s not going to function as normal.
A human body, or any kind of body? It seems like a robot could engage in the same self-preservation behavior as a human without needing to have anything like burning sensations. I can imagine a sort of AI prosthesis for people born with congenital insensitivity to pain that would make their hand jerk away from a burning hot surface, despite them not ever experiencing pain or even knowing what it is.
You seem to be claiming that you have experiences, but that their role is purely functional. If you were to experience all tactile sensations as degrees of being burnt alive, but you could still make predictions just as well as before, it wouldn’t make any difference to you?
It’s plausible that reverse-engineering the human mind requires tools that are much more powerful than the human mind.
So you don’t believe there is such a thing as first-person phenomenal experiences, sort of like Brian Tomasik? Could you give an example or counterexample of what would or wouldn’t qualify as such an experience?
Doesn’t “direct” have the implication of “certain” here?
A response in favor of the assumption that Signer said was detrimental:
but my current theory is that one such detrimental assumption is “I have direct knowledge of content of my experiences”
It’s true this is the weakest link, since instances of the template “I have direct knowledge of X” sound presumptuous and have an extremely bad track record.
The only serious response in favor of the presumptuous assumption that I can think of is epiphenomenalism in the sense of “I simply am my experiences”, with self-identity (i.e. X = X) filling the role of “having direct knowledge of X”. As for how we’re able to have conversations about “epiphenomenalism” without it playing any local causal role in producing those conversations, I’m optimistic that observation selection effects could explain this.
I’m very curious about this technique but couldn’t find anything about it. Do you have any references I can read?