Do you mean these meta-analyses?
Interesting. I thought my thinking would be mostly words, like an inner monologue or talking to myself. Now that I pay attention, it’s more like images, emotions and concepts constantly flashing through my head, most gone before I even notice them.
Introspectively it seems that my thinking has changed and I just haven’t noticed until now. Or that my conscious mind has finally learned to shut up and pay attention.
> aversion to discomfort
This made me think of what pjeby calls the pain brain. In short, our actions can be motivated either by moving toward what we want (pull) or by moving away from what we’re trying to avoid (push). Generally, push overrides pull, so you may not even notice what you want if you’re too busy avoiding what you don’t.
It may be useful to explore your goals and motivations with relaxed mental inquiry and critically examine any fears or worries that may come up.
I recently finished the book Mindset by Carol S. Dweck. I’m currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I’d like to hear others’ reactions.
The book seems to explain a lot about people’s attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone: I’ve actually ordered extra copies for friends and family), but teachers, parents and people with an interest in self-improvement will likely benefit the most.
Also, I’d appreciate pointers on how to find out whether the book is being translated into Finnish.
Edit: Fixed markdown and grammar.
How do I know I’m not simulated by the AI to determine my reactions to different escape attempts? How much computing power does it have? Do I have access to its internals?
The situation seems somewhat underspecified to give a definite answer, but given the stakes I’d err on the side of terminating the AI with extreme prejudice. Bonus points if I can figure out a safe way to retain information on its goals so I can make sure the future contains as little utility for it as feasible.
The utility-minimizing part may be an overreaction, but it does give me an idea: maybe we should also cooperate with an unfriendly AI to such an extent that negotiating is better for it than escaping and taking over the universe.
As I understand Eliezer’s position, when babyeater-humans say “right”, they actually mean babyeating. They’d need a word like “babysaving” to refer to what’s right.
Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we’d have a word for its output instead.
I think Eliezer sees translating the babyeater word for babyeating as “right” as an error similar to translating their word for babyeaters as “human”.
Blow up the paradox-causing FTL? Sounds like that could be weaponized.
I was about to go into detail about the implications of FTL and relativity but realized that my understanding is way too vague for that. Instead, I googled up a “Relativity and FTL travel” FAQ.
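For the curious, the core of the paradox argument fits in one equation (my sketch of the standard derivation, not the FAQ’s notation). Suppose a signal covers $\Delta x = u\,\Delta t$ at some speed $u > c$ in frame $S$. In a frame $S'$ moving at velocity $v$ relative to $S$, the Lorentz transformation gives

$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right) = \gamma\,\Delta t\left(1 - \frac{uv}{c^2}\right),$$

which goes negative for any ordinary subluminal observer with $c^2/u < v < c$: in their frame the signal arrives before it was sent. Chain two such signals through suitably moving relays and you can send a message into your own past, hence the paradoxes.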
I love the idea of strategically manipulating the FTL simultaneity landscape for offensive and defensive purposes. How are you planning to decide what breaks, and how severely, if a paradox is detected?