Chronological order actually appears to be much easier now that you can just click the sequence_reruns tag, although there may be more effective ways. The benefit of the rerun method is that you can see all the more recent discussion that took place when the rerun post was posted.
I’d say it is a superset of {Human Intelligence} (since humans can easily fabricate more human intelligences—it’s so easy, we often start the process entirely by accident) and a subset of {Possible AI} (since there are almost certainly mind designs that are too complex, too alien, or too something else to be near-term feasible).
Whether {Near-Term Human-Inventible AIs} has a large vs small overlap with {Evolved Intelligences} is an interesting question, but not one for which I can think of any compelling arguments offhand.
Any rough figures yet about how the cost will break down into life insurance payout vs ongoing fees?
As a young and fairly healthy adult I’m not really concerned about final costs (life insurance is cheap!) but as a low-income earner (at the moment) I am concerned about annual fees.
I personally spent some time trying to get signed up with Alcor, and eventually gave up in frustration. I understand they have also had some funding issues of late. I do not have great confidence in their ability to remain operational over the next few decades. If an organisation that looks competent and reliable could open a facility in Australia, I would feel much safer than if it were something arranged to send me off to the US.
That said, I’d be a lot happier with either option than with the current state of affairs.
I plan to be there.
Similarly, you can eliminate the word ‘rational’ from almost any sentence in which it appears. “It’s rational to believe the sky is blue”, “It’s true that the sky is blue”, and “The sky is blue”, all convey exactly the same information about what color you think the sky is—no more, no less.
I might be missing the point of this paragraph, but it seems to me that “it’s rational to believe the sky is blue” and “the sky is blue” do not convey the same information. I can conceive of situations in which it is rational to believe the sky is blue, and yet the sky is not blue. For example, the sky is green, but superintelligent alien pranksters install undetected nanotech devices into my optic and auditory nerves/brain, altering my perceptions and memories so that I see the green sky as blue, and hear (read) the word “blue” where other people have actually said (written) the word “green” when describing the sky.
Under these circumstances, all my evidence would indicate the sky is blue—and so it would be rational to believe that the sky is blue. And yet the sky is not blue. But the first statement doesn’t feel like I am generalising over cognitive algorithms in the sense I took from the big paragraph.
Am I missing or misinterpreting something?
We may well be able to do both! I’ll also be bringing a carefully selected subset of my Lego collection, so we can try playing Zendo, if we have time.
Not by me! One friend immediately replied, describing living forever as an ‘unbelievably horrible concept’. An argument developed in which he claimed to be opposed to medicine in general because it results in overpopulation. A few friends who are LW readers and/or transhumanists joined in and it grew a bit one-sided. I tried to inoffensively explain that his arguments seemed to be neglecting the fact that the other human problems we have (which would be exacerbated by cured aging) might also be solvable, failed at the ‘inoffensive’ part, and he got angry at me for thinking I knew what he was thinking better than he did and then abandoned the thread. Afterwards a few of the people who’ve ‘already seen the light’ discussed a few interesting points about the topic, like hard limits on energy consumption/use, etc.
I’ll take partial blame—I didn’t work hard enough to maintain civility, in my own posts or in the atmosphere in general. I have previously argued with this person about life extension etc., and found that he pattern-matches very promptly to the ‘typical’ opposer—someone who summons up every problem they can connect to the idea, without any thought for plausibility or relevance.
Overall it was a bit of a disappointment but some friends who I don’t think would have been exposed to the idea much otherwise did put some ‘likes’ on a few comments, which is heartening.
I know this image is pretty much pure applause light and no substance, but I think it might serve as a great talking point to post on Facebook for all my less rational friends.
A couple of months ago I tried a modafinil tablet when I had to do one of the many assignments I have for my current course (studying to be a high school maths/science/IT teacher). I have developed huge, unbearable ugh fields with the assignments for this course because they require me to write lots and lots of bullshit, which I am quite good at but despise doing.
Anyway, on the advice of another LW member I made a to-do list before I took the modafinil, and I found that I powered through the assignment without even feeling like a break. It took longer than expected (as usual) and I didn’t end up accomplishing anything else on the list, but I found I felt really positive and switched-on the whole time. I didn’t use modafinil again for a while.
So the other day, after a friend cajoled me for not experimenting again, I tried it out once more for a different context—this time playing a board game (Small World) at a friend’s place. I was curious to see if it would have any effects in a social context. Unfortunately this time I didn’t really notice any change at all, and afterwards I asked a couple of friends if I’d seemed different in any way and they hadn’t noticed anything either.
So I’m now thinking about trying some testing. I’ve debated buying a capsuling machine and making some placebo capsules, but it seems like grinding and capsuling would be a lot of effort; someone suggested I could just encase the pills in peanut butter or Nutella or similar and swallow them that way. I have two housemates who could easily help me set up a double-blind test.
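For concreteness, here is roughly the blinding setup I have in mind, as a throwaway sketch (the 30-day run, coin-flip assignment, and file name are placeholders I made up; the idea is that a housemate runs this and keeps the key file to themselves):

    import csv
    import random

    def make_blinded_schedule(n_days=30, seed=None):
        # Randomly assign each day to the real pill or the placebo.
        rng = random.Random(seed)
        return [(day, rng.choice(["modafinil", "placebo"])) for day in range(1, n_days + 1)]

    if __name__ == "__main__":
        schedule = make_blinded_schedule()
        # A housemate keeps this key file; I only ever see the day numbers.
        with open("blinding_key.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["day", "assignment"])
            writer.writerows(schedule)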
Any advice?
I have very scraggly facial hair—it grows very patchily above my jawline (i.e. it’s mostly neckbeard) and it looks terrible, so I shave daily. I have an electric shaver with a self-cleaning dock station which was a gift from my parents. My facial hair grows in various directions which makes shaving it a pain. I have to run the shaver over it in different directions, and run my fingers ahead of the shaver to pull the stubble up so the shaver will catch and trim it.
Shaving takes me about two or three minutes each morning, right before I shower. After 24 hours I will have very noticeable rough stubble which I find unpleasant, hence shaving daily. Curiously, I’ve found that it seems to take at least 22 hours or so for the stubble to reach the right length for the shaver to be maximally effective. If I shave late one day (say 1pm) and then early the next day (say 8am) I find that it doesn’t shave effectively, and I will have rough stubble again much sooner than if I’d shaved at 8am the previous day too.
I’ve used a safety razor a few times in the past which I found did give me a smoother shave (skin felt very smooth, instead of slightly rough like it does normally), but it took ages of faffing about with the shaving cream and washing the razor and everything and is just not nearly worth the benefit, unless I am doing something very fancy. I haven’t experimented with this method much at all.
I plan to be there.
I’m not saying this is necessarily the case for the people you’ve met, but remember that appearing to care more about research than status is high-status in academia.
No, the onus is on you to show that TiLiABiNT is better than TiLiRuBaBiNT, if you want me to adopt any specific interpretation. Until then, I will happily use the relevant math without worrying whether the rubber bands are magical or sapient.
Right, but now let’s imagine a world in which you heard TiLiABiNT (henceforth: angel theory) before you heard TiLiRuBaBiNT (henceforth: rubber theory). Might you not equally be arguing “No, the onus is on you to show that rubber theory is better than angel theory”?
If, when faced with competing, equally supported theories, your decision process is simply to stick with whichever one you happened to hear first, and you deny that (quantified) application of Occam’s Razor is a worthwhile tool for choosing between theories that explain the observations equally well, then you open yourself to holding beliefs with many useless, nonfunctional extra attachments stuck on. You could just as easily have heard angel theory before rubber theory. You want a mind that would settle on the less ridiculous of the two regardless of what order it heard them in.
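To make ‘quantified’ concrete, here is a toy sketch of the minimum-description-length flavour of Occam’s Razor (the bit counts and likelihood are numbers I made up purely for illustration; the point is only that when two theories fit the observations equally well, the complexity penalty alone breaks the tie):

    import math

    def score(description_length_bits, log_likelihood):
        # Higher is better: goodness of fit (in bits) minus a complexity penalty (in bits).
        return log_likelihood / math.log(2) - description_length_bits

    # Both theories predict the observations equally well...
    log_likelihood = -100.0
    # ...but rubber theory takes fewer bits to state than angel theory.
    rubber_bits, angel_bits = 120, 180

    print("rubber theory:", score(rubber_bits, log_likelihood))
    print("angel theory: ", score(angel_bits, log_likelihood))  # loses on complexity alone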
This post would be about a thousand times easier to follow if it weren’t all in-line, text-based maths.
On a tangent (and just for my curiosity), can you explain, or link an explanation of, what the phrase “non-empty reference class” means? I infer from context that it means there is a non-empty set of instances, but what is the meaning of this specific ‘reference class’ wording?
I’m not sure how your “the qualia of observing a qualia is the qualia itself” is different from my “a qualia observes itself”.
The difference, I think, is that there is an observer having the qualia, rather than just a qualia happening by itself without a qualia-haver to have it.
This is starting to feel very nebulous and free-floaty. I feel like the words we are using are not locking on very strongly to robust concepts in my mind. It may not be a productive line of discussion.
That makes a bit more sense, but I still disagree. We don’t have any problems with infinite regresses elsewhere that require such drastic denials of our own existence. I can think about a thing, and then I can think about thinking about the thing, and then I can think about doing that, and so on. But we don’t feel compelled to say “actually the thinking is happening without anything doing it” to rectify this. The infinite regress doesn’t seem to cause any problems, and in that case it’s an actual infinite regress occurring in my brain, not just a semantic infinite regress occurring in our definitions of ‘qualia’.
It seems like the problem can be just as easily solved by saying that the qualia of observing a qualia is the qualia itself. Why should they need to be separate? You experience the sensation of redness, and the experience of experiencing that sensation of redness is precisely the experience of the sensation of redness.
experience[experience[redness]] = experience[redness]
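(In other words, I’m claiming ‘experience’ is idempotent. A throwaway illustration in code, treating experiences as plain values purely for the analogy:)

    class Experience:
        def __init__(self, content):
            self.content = content

    def experience(x):
        # Experiencing something that is already an experience just gives you that
        # same experience back, so experience(experience(x)) == experience(x).
        return x if isinstance(x, Experience) else Experience(x)

    e = experience("redness")
    assert experience(e) is e  # idempotent: no infinite tower of meta-experiences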
I would at least make one comment saying “Vote in the options below” and then have the options as replies to it. Specify that any discussion should occur outside of that thread. The way it is now, the various poll options are scattered haphazardly throughout the comments and are hard to find.
You should also include an extra reply as a karma sink, so people can balance out the upvotes with downvotes. Not having a karma sink actually made me a bit reluctant to vote in the poll, which suggests that including one would remove a barrier to participation.