Uhm, maybe I actually don’t understand the poem. I’ll read it over again.
EDIT: I still get the same message from the repeated lines: that the complex systems behind the surface can't be beautiful, and are somehow innately terrible.
I think this link explains my thoughts on the poem. http://lesswrong.com/lw/oo/explaining_vs_explaining_away/
The quote states that the current establishment has no idea what’s going on. How would they be competent enough in this state to band together, write people out of existence, then keep it a secret indefinitely?
Why should it be advantageous to break your receiver? You've been dropped in a wild, mountainous region, with only as many supplies as you can carry down on your parachute. Finding and coordinating with another human is to your advantage, even if you don't get extracted immediately upon meeting the objective. The wilderness is no place to sit down on a hilltop. You need to find food, water, shelter, and protection from predators, and doing this with someone else to help you is immensely easier. We formed tribes in the first place for exactly this reason.
I've experienced (well, am also currently experiencing) a related fear of a specific part of rationality. I've seen some people on LW, and many more on OvercomingBias, express the belief that the conscious mind, the part I can call me, has so little actual control that all it's good for is making up stories, rationalizing the actions of an unconscious mind guided by outdated programming and environmental factors.
Mostly, I think, I reject this idea because it would essentially mean declaring everything I've done, every decision I've ever made, and every decision I will make, a lie. Not even one of those justifiable lies that people like to bring up when arguing against radical honesty, either. A huge, indefensible lie about every intention "I've" ever had. So essentially, by accepting such a belief, I've retroactively lied to everyone I know, myself included. All reasons for my actions can be thrown out the window, because none of them will ever be the actual reason unless I throw the burden of responsibility onto what is in essence a runaway mental train.
Every time someone asks me, "Why do you think this is the best idea?" I'll have to respond, "I don't know, but it's probably something horrifically self-serving. The driver does what they want, and I'm just here for the ride and to be the scapegoat when the idiot at the wheel does something wrong."
Well, the most apparent alternative to cryonic preservation is death. I'd say it's a good investment if the worst that can happen is that you die anyway, just as you would have without cryonics.
Justice, at least the way I’ve heard it used, is very much revenge without the stigma.
It’s possible that if there were several copies of Chell, some of them did.
While I respect priming and contamination as a bias, I think you've overdramatized it in this article. Similar exaggerations of scientific findings for shock purposes had, until recently, made me paranoid about attacks on my decision-making process, and not just from cognitive bias either. In fact, this being before I read LW, I don't think I even considered cognitive biases other than what you call contamination here, and it still seriously screwed me up emotionally and socially.
So yes, concepts will cause someone to think of related, maybe compatible concepts. No, this is not mind control, and no, a flashed image on the screen will not rewrite all your utility functions, make you a paperclip maximizer, and kill your dog.
Where are you getting "not capable"?
So, if the person discussing this, and presumably the one choosing to be rational, is C, and C must necessarily fight against a selfish, flighty, and almost completely uncaring U (except in the cases where U perceives a direct benefit), and U is furthermore assumed to have complete or nearly complete control over the person, then why be rational? The model described here makes rationality, rather than mere rationalization, literally impossible. Therefore, why try? Or did the Us just decide to force their Cs into this too, making such a model deterministic in all but name?
I also showed up to a previous meeting, and there was similarly no evidence that a meeting was even about to occur, save for the online post. I waited for something obviously LessWrong-related to happen, but nothing did for half an hour after the posted time.