The Seven Secular Sermons guy. Long-time ingroup member. Current occupation: applied AI in media. Divorced dad of three in Leipzig, Germany.
chaosmage
I suspect it is the creation of memories. You don’t experience time when you’re not creating memories. Memories are some kind of very subtle object that lasts from one moment to (at least) the next, so they leave a very subtle trace in causality. And the input that goes into them is correlated in time, because it is (some small selection from) the perceptions and representations you had simultaneously when you formed the memory.
I even believe you experience a present moment particularly intensely when you’re creating a long-term memory—I use this to consciously choose to create long-term memories, and it subjectively seems to work.
I fail to see how that’s a problem.
Why “AI alignment” would better be renamed into “Artificial Intention research”
Let’s build a fire alarm for AGI
10 great reasons why Lex Fridman should invite Eliezer and Robin to re-do the FOOM debate on his podcast
That’s exactly right. It would be much better to know a simple way to distinguish overconfidence from being actually right without a lot of work. In the absence of that, maybe tables like this can help people choose more epistemic humility.
Well, of course there are no true non-relatives; even the sabertooth and the antelope are distant cousins. The question is how much you’re willing to give up for how distant a cousin. Here I think the mechanism I describe changes the calculus.
I don’t think we know enough about the lifestyles of cultures/tribes in the ancestral environment, except that we can be pretty sure they were extremely diverse. And all cultures we’ve ever found have some kind of incest taboo that promotes mating between members of different groups.
The biological function of love for non-kin is to gain the trust of people we cannot deceive
I am utterly in awe. This kind of content is why I keep coming back to LessWrong. Going to spend a couple of days or weeks digesting this...
Welcome. You’re making good points. I intend to make versions of this geared to various audiences but haven’t gotten around to it.
I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too
I will attempt to attend this.
A big bounty creates perverse incentives where one guy builds a dangerous AI in a jurisdiction where that isn’t a crime yet, and his friend reports him so they can share the bounty.
I did not know this, and I like it. Thank you!
No, it doesn’t mean you shouldn’t be consequentialist. I’m challenging people to point out the flaw in the argument.
If you find the argument persuasive, and think the ability to “push the fat man” (without getting LW tangled up in the investigation) might be a resource worth keeping, the correct action to take is not to comment, and perhaps to downvote.
I find it too hard to keep things unrelated over time, so I prefer to keep thinking up new objects at what passes for random to my sleepy mind.
AI safety: the ultimate trolley problem
Yes, my method is to visualize a large collection of many small things that have no relation to each other, like a big shelf of random stuff. Sometimes I throw them in all directions. This is the best method I have found.
I think seeking status and pointing out you already have some are two different things. When writing an analysis, it is quite relevant to mention what expertise or qualifications you have concerning the subject matter.
I’d like to complain that the original post popularizing really bright lights was mine in 2013: “My simple hack for increased alertness and improved cognitive functioning: very bright light” on LessWrong. This was immediately adopted at MIRI and (I think obviously) led to the Lumenator described by Eliezer three years later.