I’d like to see at least some work on how to talk about LW without implying insularity.
Name-drop like a motha...
“Did you read Predictably Irrational by Dan Ariely or Thinking Fast and Slow by Nobel laureate Daniel Kahneman? We study their field of predictable human thinking errors and try to figure out how best to apply those lessons to everyday human life so that we can learn how to make decisions that are more likely to achieve our goals.
“We talk some about Alan Turing’s idea that machines could one day become smarter than humans, and how shortly thereafter we might expect them to become more powerful than humans. One of the mathematicians who worked with him to crack the German Enigma code, I.J. Good, explained that a smarter-than-human machine could use its intelligence to improve its own intelligence. And since neuroscientists like Paul Glimcher at NYU and Kent Berridge at the University of Michigan are learning that what humans care about is incredibly complex, it’s unlikely that we’ll be able to figure out how to program smarter-than-human machines to respect every little detail of what we care about.”
Or, more meta-ly: you’re not going to be very persuasive if you ignore pathos and ethos. I think this might be a common failure mode of aspiring rationalists, because we feel we shouldn’t have to worry about such things, but then we’re living in the should-world rather than the real world.
Name-dropping is a good solution for this, but in my experience people very seldom read what you name-drop, and in certain circles it comes off as a bit pretentious.
You can also exchange random Solomonoffs with the string “Solomonoff”. That way you can Solomonoff yourself out of any deep Solomonoff you find yourself in.