Do you know if Plato was claiming Euclidean geometry was physically true in that sense? Doesn’t sound like something he would say.
Shiroe
I’d like to see how this would compare to a human organization. Suppose individual workers, or individual worker-interactions, are all highly faithful in a tech company. Naturally, though, the entire tech company will begin exhibiting misalignment, tend toward blind profit-seeking, etc., despite the faithfulness of its individual parts.
Is that the kind of situation you’re thinking of here? Is that why having mind-reading equipment that forced all the workers to dump their inner monologue wouldn’t actually be of much use towards aligning the overall system, because the real problem is something like the aggregate or “emergent” behavior of the system, rather than the faithfulness of the individual parts?
What do you mean by “over the world”? Are you including human coordination problems in this?
Did you end up writing the list of interventions? I’d like to try some of them. (I also don’t want to commit to doing 3 hours a day for two weeks until I know what the interventions are.)
It’s very surprising to me that he would think there’s a real chance of all humans collectively deciding to not build AGI, and successfully enforcing the ban indefinitely.
Patternism is usually defined as a belief about the metaphysics of consciousness, but that definition collapses into incoherence, so it’s better defined as a property of an agent’s utility function: not minding being subjected to major discontinuities in functionality, i.e., being frozen, deconstructed, reduced to a pattern of information, reconstructed in another time and place, and resumed.
That still sounds like a metaphysical belief, and a less empirical one, since conscious experience isn’t involved in it (instead it sounds like it’s just about personal identity).
Any suggestions for password management?
Because it’s an individualized approach that is a WIP and if I just write it down 99% of people will execute it badly.
Why is that a problem? Do you mean this in the sense of “if I do this, it will lead to people making false claims that my experiment doesn’t replicate” or “if I do this, nothing good will come of it so it’s not even worth the effort of writing”.
I’m confused whether:

1. the point of this article is that IQ tests are broken, because some trivial life improvements (like doing yoga and eating blueberries) will raise your IQ score, or
2. the point of this article is that you actually raised your “g” by doing trivial life improvements, and we should be excited by how easy it is to become more intelligent.

Skimming it again, I’m pretty sure you mean (2).
If I understand right the last sentence should say “does not hold”.
It’s not easy to see the argument for treating your values as incomparable with the values of other people, while seeing your future self’s values as identical to your own. Unless you’ve adopted some idea of a personal soul.
The suffering and evil present in the world has no bearing on God’s existence. I’ve always failed to buy into that idea. Sure, it sucks. But it has no bearing on the metaphysical reality of a God. If God does not save children—yikes I guess? What difference does it make? A creator as powerful as has been hypothesised can do whatever he wants; any arguments from rationalism be damned.
Of course, the existence of pointless suffering isn’t an argument against the existence of a god. But it is an old argument against the existence of a god who deserves to be worshipped with sincerity. We might even admit that there is a cruel deity, and still say non serviam, which I think is a more definite act of atheism than merely doubting any deity’s existence.
“tensorware” sprang to mind
Yeah, it’s hard to say whether this would require restructuring the whole reward center in the brain, or whether the needed functionality is already there and just needs different “settings”: shifting the origin and truncating everything below zero.
My intuition is that evolution is blind to how our experiences feel in themselves. I think it’s only the relative differences between experiences that matter for signaling in our reward center. This makes a lot of sense when thinking about color and “qualia inversion” thought experiments, but it’s trickier with valence. My color vision could become inverted tomorrow, and it would hardly affect my daily routine. But not so if my valences were inverted.
What about our pre-human ancestors? Is the twist that humans can’t have negative valences either?
I agreed up until the “euthanize everything that remains” part. If we actually get to the stage of having aligned ASI, there are probably other options with the same or better value. The “gradients of bliss” that I described in another comment may be one.
Pearce has the idea of “gradients of bliss”, which he uses to try to address the problem you raised about insensitivity to pain being hazardous. He thinks that even if all of the valences are positive, the animal can still be motivated to avoid danger if doing so yields an even greater positive valence than the alternatives. So the prey animals are happy to be eaten, but much more happy to run away.
To me, this seems possible in principle. When I feel happy, I’m still motivated at some low level to do things that will make me even happier, even though I was already happy to begin with. But actually implementing “gradients of bliss” in biology seems like a post-ASI feat of engineering.
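The invariance this relies on can be sketched with a toy decision model (my own construction, not anything from Pearce): a greedy agent that picks the highest-valence option is indifferent to shifting every valence up by a constant, so an all-positive “gradients of bliss” scale can in principle preserve the same avoidance behavior.

```python
# Toy sketch (hypothetical model, not Pearce's): a greedy agent picks
# the action with the highest valence. Adding a constant to every
# valence -- pushing them all above zero -- leaves the choice unchanged,
# because only the relative differences matter to the decision.

def best_action(valences: dict) -> str:
    """Return the action with the highest valence."""
    return max(valences, key=valences.get)

# Ordinary scale: danger carries negative valence.
ordinary = {"run_away": -1.0, "get_eaten": -10.0}

# The same decision problem shifted entirely into positive territory.
blissful = {action: v + 11.0 for action, v in ordinary.items()}

assert all(v > 0 for v in blissful.values())
assert best_action(ordinary) == best_action(blissful) == "run_away"
```

Of course, this only shows the shift is coherent as a decision rule; whether biology could implement it without the negative range is the open engineering question.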
(By the way, your idea of predation-induced unconsciousness isn’t one I had heard before, it’s interesting.)
What are your thoughts on David Pearce’s “abolitionist” project? He suggests genetically engineering wild animals to not experience negative valences, but still show the same outward behavior. From a sentientist standpoint, this solves the entire problem, without visibly changing anything.
Same. I feel somewhat jealous of people who can have a visceral in-body emotional reaction to X-risks. For most of my life I’ve been trying to convince my lizard brain to feel emotions that reflect my beliefs about the future, but it’s never cooperated with me.
The existence of God and Free Will feel like religious problems that philosophers took interest in, and good riddance to them.
Whether the experience of suffering/pain is fictional or not is a hot topic in some circles, but both sides are quite insistent about being good church-going “materialists” (whatever that means).
As for “knowledge”, I agree that question falls apart into a million little subproblems. But it took the work of analytic philosophers to pull it apart, and after much labor. You’re currently reaping the rewards of that work and the simplicity of hindsight.