Working in AI, I have a professional interest in cognitive science and decision theory. This community is interesting to me mostly out of bafflement. It’s not clear to me exactly what the Point of it is.
I can understand the desire for a place to talk about such things, and for a gathering point for folks with similar opinions about them, but the directionality implied in the effort taken to make Less Wrong what it is escapes me. Social mechanisms like karma help weed out socially miscued or incompatible communications, but they aren’t well suited to settling questions of fact. The culture may be fact-based, but this certainly isn’t an academic or scientific community; its mechanisms have nothing to do with data management, experiment, or documentation.
The community isn’t going to make any money (unless it changes) and is unlikely to do more than give budding rationalists social feedback (mostly from other budding rationalists). It could potentially serve as a distribution mechanism for rationalist essays from pre-existing experts, but Overcoming Bias is already that.
It’s interesting content, no doubt. But that just makes me more curious about goals. The founders and participants in LessWrong don’t strike me as likely to have invested so much specific time and effort getting it to be the way it is unless there were some long-term payoff. I suppose I’m following along at this point, hoping to figure that out.
Handle: outlawpoet
Name: Justin Corwin
Location: Playa del Rey, California
Age: 27
Gender: Male
Education: autodidact
Job: researcher/developer for Adaptive AI, internal title: AI Psychologist
aggregator for web stuff
I suspect we’re going to hear more about the goal in May. We’re not allowed to talk about it, but it might just have to do with exi*****ial r*sk...