Are you saying that you don’t think there’s any fact of the matter whether or not you have phenomenal experiences like suffering? Or do you mean that phenomenal experience is unreal in the same way that the hellscape described by Dante is unreal?
I don’t like “illusionism” either, since it makes it seem like illusionists are merely claiming that consciousness is an illusion, i.e., that it is something different from what it seems to be. That claim isn’t very shocking or novel, but it isn’t what illusionists are claiming. They’re actually claiming that you aren’t having any internal experience in the first place. There isn’t any illusion.
“Fictionalism” would be a better term than “illusionism”: when people say they are having a bad experience, or an experience of saltiness, they are just describing a fictional character.
Exactly. I wish the economic alignment issue were brought up more often.
You’re right. I’m updating towards illusionism being orthogonal to anthropics in terms of betting behavior, though the upshot is still obscure to me.
I agree realism is underrated. Or at least the term is underrated. It’s the best way to frame ideas about sentientism (in the sense of hedonic utilitarianism). On the other hand, you seem to be talking more about the rhetorical benefits of normative realism about laws.
Most people seem to think phenomenal valence is subjective, but that conflates two senses of the word “subjective”: it can mean either arbitrary or bound to a first-person subject. All observations (including valenced states like suffering) are subjective in the second sense, but not in the first. We have good evidence for believing that our qualities of experience are correlated across a great many sentient beings, rather than being some kind of private uncorrelated noise.
“Moral realism” is a good way to describe this situation that we’re in as observers of such correlated valences, even if God-decreed rules of conduct aren’t what we mean by that term.
it is easy to cooperate on the shared goal of not dying
Were you here for Petrov Day? /snark
But I’m confused about what you mean by a Pivotal Act being unnecessary. Although both you and a megacorp want to survive, you each have very different priors about what is risky. Even if the megacorp believes your alignment program will work as advertised, that only compels them to cooperate with you if they (1) are genuinely concerned about risk in the first place, (2) believe alignment is so hard that they will need your solution, and (3) actually possess the institutional coordination abilities needed.
And this is just for one org.
World B has a 1, maybe minus epsilon, chance of solving alignment, since the solution is already there.
That is totum pro parte. It’s not World B which has a solution at hand. It’s you who have a solution at hand, and a world that you have to convince to come to a screeching halt. Meanwhile people are raising millions of dollars to build AGI and don’t believe it’s a risk in the first place. The solution you have in hand has no significance for them. In fact, you are a threat to them, since there’s very little chance that your utopian vision will match up with theirs.
You say World B has chance 1 minus epsilon. I would say epsilon is a better ballpark, unless the whole world is already at your mercy for some reason.
Okay, let’s operationalize this.
Button A: The state of alignment technology is unchanged, but all the world’s governments develop a strong commitment to coordinate on AGI. Solving the alignment problem becomes the number one focus of human civilization, and everyone just groks how important it is and sets aside their differences to work together.
Button B: The minds and norms of humans are unchanged, but you are given a program by an alien that, if combined with an AGI, will align that AGI in some kind of way that you would ultimately find satisfying.
World B may sound like LW’s dream come true, but the question looms: “Now what?” Wait for Magma Corp to build their superintelligent profit maximizer, and then kindly ask them to let you walk in and take control over it?
I would rather live in World A. If I were a billionaire or dictator, I would consider B more seriously. Perhaps the question lurking in the background is this: do you want an unrealistic Long Reflection or a tiny chance to commit a Pivotal Act? I don’t believe there’s a third option, but I hope I’m wrong.
I agree that the political problem of globally coordinating non-abuse is more ominous than solving technical alignment. If I had the option to solve one magically, I would definitely choose the political problem.
What it looks like right now is that we’re scrambling to build alignment tech that corporations will simply ignore, because it will conflict with optimizing for (short-term) profits. In a word: Moloch.
It’s happened before, though. Despite being one of those two friends, I’ve already been forced to change my habits and regard video calls as a valid form of communication.
none of this requires separate privileged existence different from the environment around us; it is our access consciousness that makes us special, not our hard consciousness.
That sounds like a plausible theory. But if we reject that there is a separate first-person perspective, doesn’t that entail that we should be Halfers in the Sleeping Beauty problem (SBP)? Not saying it’s wrong. But it does seem to me like illusionism/eliminativism has anthropic consequences.
I can see how a computer could simulate any anthropic reasoner’s thought process. But if you ran the Sleeping Beauty problem as a computer simulation (i.e. implemented the illusionist paradigm), aren’t the Halfers going to be winning on average?
Imagine the problem as a genetic algorithm with one parameter, the credence. Wouldn’t the whole population converge to 0.5?
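To make that concrete, here is a minimal sketch of the kind of third-person simulation I have in mind. A grid search over the single parameter (the reported credence) stands in for the genetic algorithm, and the Brier (squared-error) penalty plus the `score_per` switch are my own modelling assumptions, not anything fixed by the problem statement:

```python
import random

def average_penalty(credence, n_experiments=100_000, score_per="experiment"):
    """Third-person simulation of the Sleeping Beauty protocol.

    Each experiment flips a fair coin: heads -> one awakening,
    tails -> two awakenings. At every awakening the agent reports the
    same fixed credence that the coin landed heads, and the report is
    scored with a Brier (squared-error) penalty. The only modelling
    choice is whether that penalty is tallied once per experiment or
    once per awakening.
    """
    total = 0.0
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        penalty = (credence - (1.0 if heads else 0.0)) ** 2
        total += penalty if score_per == "experiment" else awakenings * penalty
    return total / n_experiments

if __name__ == "__main__":
    random.seed(0)
    candidates = [i / 20 for i in range(21)]  # credences 0.00, 0.05, ..., 1.00
    for rule in ("experiment", "awakening"):
        best = min(candidates, key=lambda p: average_penalty(p, score_per=rule))
        print(f"scored per {rule}: best credence ~ {best:.2f}")
```

The one design choice I don’t know how to pin down from the problem statement alone is `score_per`: whether the fitness function charges the penalty once per coin flip or once per awakening.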
Can you explain what you mean by “underdetermined” in this context? How is there any ambiguity in resolving the payouts if the game is run as a third person simulation?
If I program a simulation of the SBP and run it under illusionist principles, aren’t the simulated Halfers going to inevitably win on average? After all, it’s a fair coin.
So you’d say that it’s coherent to be an illusionist who rejects the Halfer position in the SBP?
I’m fine with everything on LW ultimately being tied to alignment. Hardcore materialism being used as a working assumption seems like a good pragmatic measure as well. But ideally there should also be room for foundational discussions like “how do we know our utility function?” and “what does it mean for something to be aligned?” Having trapped priors on foundational issues seems dangerous to me.
Thanks. That solved my issue.
[Question] How do anthropic reasoning and illusionism/eliminativism interact?
What would it be conscious of, though? Could it feel a headache when you gave it a difficult riddle? I don’t think a look-up table can be conscious of anything except for matching bytes to bytes. Perhaps that corresponds to our experience of recognizing that two geometric forms are identical.
In the sense you mean it, this is a general argument against the existence of everything, because ultimately words have to be defined either in terms of other words or in terms of things that aren’t words. Your ontology has the same problem, to the same degree or worse. But we only need to give particular examples of conscious experience, like suffering. There’s no need to prove that there is some essence of consciousness. Theories that deny the existence of these particular examples are (at best) at odds with empiricism.
It’s deeply unclear to me what you mean by this. If you’re denying that you have phenomenal experiences like suffering (i.e. negative valences), your rational decision-making should be strongly affected by this belief, in the same way that someone who has stopped believing in Hell and Heaven should change their behavior to account for that radical change in their ontology.