Well, Blindsight impressed me enough that I’ve started The Ego Tunnel. In short, the idea of unconscious intelligence bothered me. My intuition says that consciousness could be what happens when something tries to model its own intelligence and actions, but of course that hardly explains anything. While I feel it’s unlikely I’ll find many good answers, it is interesting enough to be enjoyable to read.
Alright. I’ve read the first few pages of the first link, “Consciousness and its Place in Nature”, and it seems to boil down to: “We can think of zombies without our current minds seeing any major issue (a priori!), therefore consciousness isn’t physical.”
That sums up the current state of knowledge.
Which was sort of my question: Do I have a whole lot to gain by reading the current information available? Will I obtain valuable insights on things, or even be rather entertained? Or am I just gonna end up in the same place, but with a deeper respect for how difficult it is to figure things out?
OK, given the strong reaction to my comment I will check it out. I’d love to be in for a big update, but the whole zombie thing is so perplexing (how can anyone take it seriously without being an outright dualist?) that it really would be a huge update for me.
Isn’t that the same guy who takes p-zombies seriously? I find it hard to imagine that someone with such a basic misunderstanding would be capable of handling consciousness.
Thanks for the reply. Yes, I found out the term is “negative utilitarianism”. I suppose I can search and find rebuttals of that concept. I didn’t mean that the function was “if suffering > 0 then 0”, just that suffering should be a massively dominating term, so that no possible world with real suffering outranks a world with less suffering.
As to your question about my personal preference on life, it really depends on the level of suffering. At the moment, no, things are alright. But it has not always been that way, and it’s not hard to see it crossing over again.
I would definitely obliterate everyone on Earth, though, and would view not doing so, if capable, as immoral, purely because so many sentient creatures are undergoing a terrible existence, and the fact that you and I are having an alright time doesn’t make up for it.
Good points. But I’m thinking that the pain of death is purely because of the loss others feel. So if I could eliminate my entire family and everyone they know (which ends up pulling essentially every person alive into the graph), painlessly and quickly, I’d do it.
The bug of scope insensitivity doesn’t apply if everyone gets wiped out nicely, because then the total suffering is 0. So, for instance, grey goo taking over the world in an hour: that would cause a spike of suffering, but then the level drops to 0, so I think it’s alright. Whereas an asteroid that kills 90% of people would leave a huge amount of suffering for the survivors.
In short, the pain of one child dying is the sum of the pain others feel, not anything intrinsic to that child’s death. So if you shut up and multiply with everyone dying, you get 0. Right?
If the suffering “rounds down” to 0 for everyone, sure, A is fine. That is, a bit of pain in order to keep Fun. But no hellish levels of suffering for anyone. Otherwise, B. Given how the world currently looks, and MWI, it’s hard to see how it’s possible to end up with everyone having pain that rounds down to 0.
So given the current world and my current understanding, if someone gave me a button that would eliminate Earth in a minute or so, I’d press it without hesitation.
It is if we define a utility function with a strict failure mode for TotalSuffering > 0. Non-existent people don’t really count, do they?
Sure. The goal is to make TotalSuffering as small as possible, where each individual Suffering is >= 0. There may be some level of individual Suffering that rounds down to zero, like the pain of hurting your leg while trying to run faster, or stuff like that. The point is to make sure no one is in real suffering, not to eliminate all Fun. (I try to put this in symbols just after this comment.)
One way to do that is to make sure no one is suffering. That entails a gigantic amount of work. And if I understand MWI, it’s actually impossible, as branches will occur that create a sort of hell. (Only considering forward branches.) Sure, it “all averages out to normal”, but tell that to someone in a hell branch.
The other way is to eliminate all life (or the universe). Suffering is now at 0, an optimal value.
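Roughly, here is the objective I have in mind, written out. This is only a sketch: the threshold ε and the max form are my own illustration of “rounds down to zero”, not anything precise I’m committed to.

```latex
% Rough formalization of the goal described above (illustrative only).
% s_i >= 0 is the suffering of individual i; eps is the level of pain
% that "rounds down" to zero (sore legs from running faster, etc.).
\[
  \mathrm{TotalSuffering} \;=\; \sum_i \max\bigl(0,\; s_i - \varepsilon\bigr),
  \qquad s_i \ge 0 .
\]
\[
  \text{Prefer the world with the smallest } \mathrm{TotalSuffering};
  \quad \text{an empty world trivially gives } \mathrm{TotalSuffering} = 0 .
\]
```

On this way of writing it, both routes in the comment above are just ways of driving the sum to its minimum: either push every sᵢ below ε, or remove the individuals being summed over.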
But he views extinction-level events as “that much worse” than a single death. But is an extinction-level event that bad? If everyone gets wiped out, there’s no suffering left.
I’m not against others being happy and successful, and sure, that’s better than them not being. But I seem to have no preference for anyone existing. Even myself, my kids, my family—if I could, I’d erase the entire lot of us, but it’s just not practical.
This is something to think about, thanks.
What about the seeming preference for existence over non-existence? How do you morally justify keeping people around when there is so much suffering? In the dust specks versus torture case, why not simply erase everyone?
Why does Eliezer love me?
In many articles, EY mentions that Death is bad, as if it’s some terminal value. That even the loss of me is somehow negative for him. Why?
I’ve been thinking that it’s Suffering that should be minimized, in general. Death is only painful for people because of the loss others suffer. Yes, the logical conclusion is that we should completely destroy the universe, in a quick and painless manner. The “painless” part is the catch, of course, and it may be so intractable as to render the entire thought pointless. (That is, we cannot achieve this, so might as well give up and focus on making things better.)
Even outside of Suffering, I still do not see why an arbitrary person is to be valued. Again, EY seems to have this as some terminal value. Why?
I love my children, I love my family, I love some friends. After that, I don’t really care all that much about individuals, except to the extent that I’d prefer them not to suffer. I certainly don’t feel their existence alone is intrinsically all that valuable.
Am I wicked or something? Am I missing some basic reasoning? I see my view might be labeled “negative utilitarian”, but I haven’t come across anything in particular that makes such a position less desirable.
That sort of confirms my suspicion—that it’s a very active topic. And it’s not necessarily easy to break into. I was hoping there was a good pop-sci summary book that laid things out real nicely. Like what The Selfish Gene does for evolution. But I read the book Blindsight, and am now reading Metzinger’s The Ego Tunnel, just because it seemed incredibly interesting. So who knows how deep this will go for me :)