I always forget that...thanks.
I don’t have time to write a full report, but Less Wrong Montreal had a meetup on the 22nd of January that went well. Here’s the handout that we used; the exercise didn’t work out too well because we picked an issue that we all mostly agreed on and understood pretty well. A topic where we disagreed more would have been more interesting (afterwards I thought “free will” might have been a good one).
Thanks for pointing it out, fixed.
I run the Montreal Less Wrong meetup, which for the last few months has been structuring the content of our meetups, with varying degrees of success.
This was the first meetup posted to meetup.com in an effort to find some new members. There were about 12 of us, most of whom were new and had never heard of Less Wrong before. Although this was a few more people than I was expecting, the meetup was still a really good introduction to Less Wrong/rationality and was appreciated by everyone present.
My strategy for the meetup was to show a concrete exercise that was useful and that gave a good idea of what Less Wrong/rationality is about. This is a handout I composed for the meetup to explain the exercise we were going to be doing. It’s a five-second-level breakdown of a few mental skills for changing your mind when you’re in an argument; any feedback on the steps I listed is appreciated, as no one reviewed them before I used them. People found the handout useful, and it gave a good idea of what we would be trying to accomplish.
The meetup began with us going around and introducing ourselves, including how we came to find the meetup. Some general remarks about the demographics:
The attendees were 100% male. There were a few women who were going to attend, but cancelled at the last minute.
Only two out of the 12 didn’t have a background in science. The science backgrounds included math, biology, engineering and others.
After a quick overview of what rationality is, people wanted to go through the handout. We read through each of the skills, several of which sparked interesting discussions. Although the conversation often went off on tangents, the tangents were very productive, as they served to explain what rationality is. They often took the form of people discussing situations where they had noticed others reacting in the ways described in the handout, and how someone should think in such cases.
The exercise described on the second page of the handout was not successful. I had tried to find beliefs that are not too controversial, but might still cause people to disagree. Feedback from the group indicated that I could have used more controversial beliefs (religion, spirituality, politics, etc.), as the feelings evoked would have been more intense and easier to notice; however, that might also have offended more people, so I’m not sure whether it would have been better. If I were to run this meetup again, I would rethink this exercise.
The meetup concluded with me giving a brief history of Less Wrong, and mentioning HPMOR and the sequences. I provided everyone with some links to relevant Less Wrong material and HPMOR in the discussion section of the meetup group afterwards.
Let me know if you have any questions or comments; any feedback is appreciated!
I like this idea; seeing as I have a meetup report to post, I just started a monthly Meetup Report Thread. Hopefully, people will do what you describe.
That’s true: those points ignore the pragmatics of a social situation in which you use the phrase “I don’t know” or “There’s no evidence for that”. But put yourself in the shoes of the boss instead of the employee (in the example given in “I don’t know”): even when you have “no information”, you still have to make a decision, and it’s useful to remember that you probably DO know something that can at least give you an indication of what to do.
The points are also useful when the discussion is with a rationalist.
The post What Bayesianism Taught Me is similar to this one; your post has some elements that one doesn’t have, and that one has a few that yours doesn’t. Combining the two, you end up with quite a nice list.
I think “seems like a cool idea” covers that; it doesn’t say anything about expected results (people could specify).
I don’t see how the barriers become irrelevant just because they aren’t clearly defined. There might not be a specific point at which a mind becomes sentient, but that doesn’t mean all living things are equally sentient (Fallacy of Grey).
I think Armstrong 4, rather than making his consideration for all living things uniform, would make himself smarter and try to find an alternate method of determining how much each living creature should be valued in his utility function.
How about an AI whose utility function is orthogonal to yours? You care nothing about anything it cares about, and it cares about nothing you care about. Also, would you call such an AI sentient?
Ok, I see what your concern is: with the hype around Soylent, everyone’s opinion is skewed (even if they’re not among the fanboys).
You decided above that it wasn’t worth your time to try your own self-experiments with it. What if someone else were to take the time to do it? I like the concept but agree with the major troubles you listed above, and I have no experience with designing self-experiments. But maybe I’ll take the time to try to do it properly: long-term, with regular blood tests, noting what I’ve been eating for a couple of months before starting, taking data about my fitness levels, etc. Of course, I would need to analyze the risk to myself beforehand.
What would you like to see done differently? You mentioned the more thorough self-experimentation he could have done (really should have done), but there’s still someone else who could step up to the plate and do some self-testing.
Thorough studies? Those might also be done some time in the future, whether or not they’re funded by Rob (I’m not sure about this point; there might not be an incentive to do so once it’s being sold).
Sure, Rob jumped the gun and hyped it up. But most of the internet is already a giant circle-jerk. Doesn’t stop people from generating real information, right?
I don’t have enough experience to even give an order of magnitude, but maybe I can give an order of magnitude of the order of magnitude:
Right now, the probability of Christianity specifically might be somewhere around 0.0000001% (and that’s probably too high). One hour post judgement-day, it might rise to somewhere around 0.001%, an increase of several orders of magnitude.
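To make that arithmetic concrete, here’s a minimal sketch in Python of an update in odds form; the prior matches the number above, but the 10,000:1 likelihood ratio is a made-up figure for illustration, not a claim about the actual strength of the evidence:

```python
# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio.
def update(prior_prob: float, bayes_factor: float) -> float:
    """Return the posterior probability after a Bayes-factor update."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

prior = 1e-9                    # ~0.0000001%, the prior from above
posterior = update(prior, 1e4)  # hypothetical 10,000:1 likelihood ratio
print(f"{posterior:.1e}")       # ~1.0e-05, i.e. ~0.001%
```

At probabilities this small, odds and probabilities nearly coincide, so the update is effectively just multiplying by the Bayes factor: four orders of magnitude here.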
Now let’s say the world continues to burn, I see angels in the sky, get to talk to some of them, see dead relatives (who have information that allows me to verify that they’re not my own hallucinations), and so on... the accumulating evidence could bring the hypothesis to one of the top spots in the ranking of plausible explanations.
...assuming that I’m still free to experiment with reality and not chained and burning. Also assuming that I actually take the time to do this, as opposed to running and hiding.
The continued burning makes the hallucination hypothesis less probable for as long as it goes on; even more so if it persists in defiance of the laws of physics, as you point out.
What do you expect will happen? Do you think lots of people are going to get very sick by going on a Soylent-only diet immediately, not monitoring their health closely, and ending up with serious nutritional deficiencies? That’s one of the more negative scenarios, but I honestly don’t know how likely it is. I think people are likely to do at least one of three things:
Monitor their health more closely (especially on a Soylent-only diet),
Only replace a few meals with Soylent (not more than, say, 75%),
Return to normal food or see a doctor if a serious deficiency occurs.
Then again, I may have too much confidence in people’s common sense. Rob is definitely marketing it as a finished product and a miracle solution.
The concept is good, but the methodology could have been significantly better. It has lots of potential, and the real danger is limited to those who will be consuming ONLY Soylent for extended periods. Using it to replace a meal or two a day, while still having a complete meal every day, shouldn’t be dangerous (I think).
What confuses me about the negativity is: what’s so bad about the current situation? The earliest adopters will serve as a giant trial, and if there are problems, they’ll come up there.
Also: people who intend to switch to JUST Soylent should be monitored by a doctor or a nutritionist, at least for the first while. And they should post their results either here or on the Soylent board; I am very interested to hear some anecdata.
Beware of identifying in general. “We” are all quite different. Few if any of “us” can be considered reasonably rational by the standards of this site.
That’s a good point, which I’ll watch out for in the future.
With a sizable minority of theists here, why is this even an issue, except maybe for some heavily religious newcomers?
One thing I didn’t specify is that this applies to discussions with non-LessWrongers about religion (or about LessWrong). On the site, there’s no point in bothering with this identification process, because we’re more likely to notice that we’re generalizing and ask for an elaboration.
I’m thinking of making a Discussion post about this, but I’m not sure if it has already been mentioned.
We’re not atheists—we’re rationalists.
I think it’s worth distinguishing ourselves from the “atheist” label. On the internet, and in society (what I’ve seen of it, which is limited), the label includes a certain kind of “militant atheist” who loves to pick fights with the religious and crusade against religion whenever possible. The arguments are, obviously, the same ones being used over and over again, and even people who would identify as atheists don’t want to associate themselves with this vocal minority that systematically makes everyone uncomfortable.
I think most LessWrongers aren’t like that, and don’t want to attach a label to themselves that will sneak in those connotations. Personally, I identify as a rationalist, not an atheist. The two things that distinguish me from them:
Social consequentialism: I know conversations about religion are often not productive, so I’m quick to tap out of such discussions.
Persuadability: unlike a lot of atheists, I could, in principle, be persuaded to believe otherwise (given sufficient evidence). If judgement day comes and I see the world burning around me, I will probably first think that I’ve gone insane; but the probability I assign to theism will increase, as per Bayes’ Theorem.
Note that this impression depends on who you know, so I might be in the minority in the connotations I see attached to atheism.
What do people think? I wrote this pretty quickly, and could take the time to write a more coherent text to post in Discussion.
A real-world adblock would be great; you could also use this type of augmented reality to improve your driving, walk through your city and see it in a completely different era, use it for something like the Oculus Rift...the possibilities are limitless.
Companies will act in their own self-interest, by giving people what it is they want, as opposed to what they need. Some of it will be amazingly beneficial, and some of it will be...not in a person’s best interest. And it will depend on how people use it.
I agree, and I’ll keep that in mind. The topic is extremely broad, though, so I don’t know how much time I’ll have to focus on it. I’m actually thinking of having several meetups on this, depending on people’s interest.