“I’d probably suggest writing a novel first.”
It blows my mind that nobody (?) has written a sci-fi novel on alignment yet.
Working with others in a shared environment with scientific ground rules helps ensure that your biases and their biases form a non-intersecting set.
I liked your first point but come on here.
Lack of curiosity made people lose money to Madoff. This you already know—people did not do their due diligence.
Here’s what Bienes, a partner of Madoff’s who passed clients to him, said to the PBS interviewer for The Madoff Affair (before the 10-minute mark) when asked how he thought Madoff could promise 20%:
Bienes: ‘How do I know? How do you split an atom? I know that you can split them; I don’t know how you do it. How does an airplane fly? I don’t ask.’ ‘Did you ask him?’ ‘Never! Why would I ask him? I wouldn’t understand it if he explained it!’
And a minute later: ‘Did you ever think to yourself, this is just too easy, this is too good?’ Bienes: ‘I said, “I’m a little too lucky. Why am I so fortunate?” And then I came up with the answer, my wife and I came up with the answer: “God wanted us to have this. God gave us this.”’
Are you kidding me? I’m staring right now, beside me, at a textbook chapter filled with catalogs of human values, with a list of ten that seem universal, with theories on how to classify values, all with citations of dozens of studies: Chapter 7, Values, of Chris Peterson’s A Primer in Positive Psychology.
LessWrong is so insular sometimes. Like lionhearted’s post Flashes of Nondecisionmaking yesterday—as if neither he nor most of the commenters had heard that we are, indeed, driven much by habit (e.g. The Power of Habit; Self-Directed Behavior; The Procrastination Equation; ALL GOOD SELF HELP EVER), and that the folk conception of free will might be wrong (which has long been argued, e.g. in Sam Harris’s Free Will).
If a bot were built to maximize the ratio of pleasant-dopamine-buzz in-group LessWrong language to non-in-group language, it would produce something like this comment.
I say this even though I really appreciate the comment and think it has genuine insight.
Agreed that it should have been in Discussion.
If this gets upvoted highly, I will update in favor of LessWrong continuing to become more in-group-y, more cutesy, and less attached-to-actual-change-y. It’s becoming so much delicious candy!
How can someone have such a good memory?
More like an exception handling routine that’s just checking for out-of-bounds errors.
Oh God. I love this place.
And this is why I love LessWrong, folks—sometimes. In other rationality communities—ones that conceived of rationality as something other than “accomplishing goals well”—this kind of post would be hurrah’d.
Why? Because dying is painful? Beyond that, I see them as equivalent.
Hey guys, how about we debate who’s being egoistic about saving the world and who isn’t? That sounds like a really good way to use LessWrong and knowledge of world-saving.
Which is why I’m still puzzled by a simplistic moral dilemma that just won’t go away for me: are we morally obligated to have children, and as many as we can? Sans using that energy or money to more efficiently “save” lives, of course. It seems to me we should encourage people to have children (a common thing that many more people will actually do than donate philanthropically), in addition to other philanthropy encouragements.
It seems to me like a pretty small probability that an AI not designed to self-improve will be the first AI that goes FOOM, when there are already many parties known to me who would like to deliberately cause such an event.
I know this is four years old, but this seems like a damn good time to “shut up and multiply” (thanks for that thoughtmeme by the way).
That is cute, no? More childish than evil. He should just be warned that that’s trolling.
There really should be a comment edit history feature. Maybe it only activates once a comment reaches +10 karma.
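A quick sketch of how that might look, in Python; the +10 cutoff, names, and fields here are just my guesses at one possible implementation, not anything LessWrong actually runs:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical threshold from the suggestion above: only start keeping
# history once a comment is visible enough that silent edits could mislead.
EDIT_HISTORY_KARMA_THRESHOLD = 10

@dataclass
class Comment:
    body: str
    karma: int = 0
    history: list[tuple[datetime, str]] = field(default_factory=list)

    def edit(self, new_body: str) -> None:
        if self.karma >= EDIT_HISTORY_KARMA_THRESHOLD:
            # Preserve the old body with a timestamp before overwriting it.
            self.history.append((datetime.now(timezone.utc), self.body))
        self.body = new_body
```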
I just wanted to tell everyone that it is great fun to read this in the voice of that voice actor for the Enzyte commercial :)
This is wrong.
If you discard the emotionally-laden word “agenda” (in my experience, its usage always indicates negative affect toward the thing with the “agenda”), what you’re basically saying is this: Anyone or any organization that concludes that the evidence for something is strong and that it matters, and who consequently takes a stand—their conclusions should be thrown out a priori. You did say “effectively nullifies anything they say”—those are damn strong words. So what you’re implying, AFAICT, is that you only listen to ‘what someone has to say’ if they don’t come to a strong conclusion and become an advocate for change (even though one could argue there is a moral obligation to do exactly that).
I’m disappointed to find this kind of thinking on LessWrong, to be honest, not least from one of the regulars.
Edit: specifically on the topic at hand, my initial response to yourbrainonporn.com is positive not only because of the comprehensive and well-cited posts I read on the homepage, but because of Gary Wilson’s response (about halfway down) here: http://www.yourbrainrebalanced.com/index.php?topic=2754.0 -- It’s clear that he really knows what he’s talking about, even when the average neurologist doesn’t. (I’m not saying I believe it’s perfect; I can see motivated cognition going on, and am disappointed in the lack of mention of selection bias, but from what I can tell he is… (removes sunglasses) …less wrong than the average expert.)
I know this is old. What is really meant by “does not help their case, either” is “it hurts their case that they don’t have formal training”. I vehemently disagree. Not that I think formal training is bad. Just that I think giving emphasis to this indirect indicator of their competence is misleading, because there’s plenty of direct evidence—if you read the site—that they ‘know what they’re talking about’.
It seems to me this could be a smartphone app. Whenever people want to make a prediction about a personal event, they open the app and speak, with a pause between the prediction and how likely they think it is. The app could just store the verbatim text, separating question and answer, and timestamp recordings in case they want to update the prediction later. If they learn to specify when the outcome should resolve, the app can make a sound to remind them to check off whether it happened; otherwise it could remind them periodically, say at the end of every day. Why couldn’t it have data-analysis tools to let them visualize calibration, or find useful patterns and alert them? Seems a plausible app to me.
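To make the data-analysis part concrete, here is a rough sketch of the core record and a calibration summary such an app could compute; all names and fields are invented for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Prediction:
    statement: str                  # verbatim text of the predicted event
    probability: float              # stated likelihood, 0.0 to 1.0
    made_at: datetime
    due_at: datetime | None = None  # optional "remind me to check" time
    outcome: bool | None = None     # filled in when the user checks it off

def calibration(predictions: list[Prediction], bucket_width: float = 0.1) -> dict[float, float]:
    """Bucket resolved predictions by stated probability and report how
    often each bucket actually came true (perfect calibration: key == value)."""
    buckets: dict[float, list[bool]] = defaultdict(list)
    for p in predictions:
        if p.outcome is not None:
            key = round(round(p.probability / bucket_width) * bucket_width, 2)
            buckets[key].append(p.outcome)
    return {b: sum(o) / len(o) for b, o in sorted(buckets.items())}
```

If someone's 80%-bucket predictions come true only 60% of the time, the app could flag that as overconfidence; that is the kind of "useful pattern" alert I mean.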
I messaged Jim on a different platform and he promptly replied:
You can get a zipfile of card images from https://carddb.rationalitycardinality.com/card/export/images
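If you’d rather script the download, something like this should work (untested; it assumes that endpoint just serves an ordinary zip archive):

```python
import io
import urllib.request
import zipfile

URL = "https://carddb.rationalitycardinality.com/card/export/images"

# Fetch the archive into memory and unpack it into ./card_images.
with urllib.request.urlopen(URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))
archive.extractall("card_images")
```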
Woot! I haven’t done this before, but my plan is to order cheap, fast card sleeves from Amazon along with cheap playing cards, print the card images at regular size, and slip each printed image into a sleeve on top of a playing card (for backing).
There’s also this currently-defunct link to buy a nicer print version than that; maybe the link will be fixed by the time you read this, idk: https://www.thegamecrafter.com/games/rationality-cardinality-beta-6