VAuroch
No one defines qualia clearly. If they did, I’d have a conclusion one way or the other.
I don’t see any difference between me and other people who claim to have consciousness, but I have never understood what they mean by consciousness or qualia to an extent that lets me conclude that I have them. So I am sometimes fond of asserting that I have neither, mostly to get an interesting response.
Nice to see someone taking the lead! I’ve been looking for something to work on, and I’d be proud to help rebuild LW. I’ll send you a message.
Huh. I think I’ve been doing this at my current (crappy, unlikely to lead anywhere, part-time remote contract programming) job. Timely!
I have heard this discussed for at least the last year, well before Stuart started his series, and would be very surprised if it was not true. I’d put down $30 to your $10 on the matter, pending an agreed-upon resolution mechanism for the bet.
Well, no posts are deleted. If you look at Main and sort chronologically, you can go through and count articles per unit time and note what fraction of them are math-heavy (which should be easy to check from a once-over skim).
I think this is pretty much accepted wisdom in the rationalsphere. Several people, online and in person, have said things to the effect of “Tumblr is for socializing, private blogs are for commenting on whatever the blogger writes about, and LessWrong is for math-heavy things, quotes threads, and meetup scheduling.” But if you doubt it, you can absolutely check.
Yes, I agree completely. Honestly, I thought this line of reasoning was common knowledge in the rationalsphere, since I think I’ve seen it discussed a couple of times on Tumblr and in person (IIRC, both in Portland and in the Bay Area).
Back when LW was more active, there was much lower math density in posts here.
Point, but not a hard one to get around.
There is a theoretical lower bound on energy per computation, but it’s extremely small, and the timescale they’ll be run on isn’t specified. Also, unless Scott Aaronson’s speculative consciousness-requires-quantum-entanglement-decoherence theory of identity is true, there are ways to use reversible computing to get around the lower bound and achieve theoretically limitless computation, as long as you don’t need it to output results. Having that technology exist adds improbability, but not much on the scale we’re talking about.
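To give a sense of scale for that lower bound: the Landauer limit puts the minimum energy cost of erasing one bit at kT·ln(2), which at an assumed room temperature of 300 K works out to roughly 3×10⁻²¹ joules. A quick sketch of the arithmetic:

```python
import math

# Landauer limit: minimum energy dissipated per bit ERASED is k_B * T * ln(2).
# Reversible computing sidesteps this bound precisely by never erasing bits.
k_B = 1.380649e-23  # Boltzmann constant in J/K (exact, 2019 SI definition)
T = 300.0           # assumed ambient temperature in kelvin

e_min = k_B * T * math.log(2)  # joules per bit erased
print(f"{e_min:.3e} J per bit erased")  # ~2.87e-21 J at 300 K
```

For comparison, a single watt spent at this limit would pay for on the order of 10²⁰ bit-erasures per second, which is why the bound barely constrains the scenario unless the computation must run irreversibly.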
It’s easy if they have access to running detailed simulations, and while the probability that someone secretly has that ability is very low, it’s not nearly as low as the probabilities Kaj mentioned here.
Double-blind trials aren’t the gold standard, they’re the best available standard. They still fail to replicate far too often, because they don’t remove bias (and I’m not just referring to publication bias). Which is why, when considering how to interpret a study, you look at the history of what scientific positions the experimenter has supported in the past, and then update away from that to compensate for bias which you have good reason to think will show up in their data.
In the example, past results suggest that, even if the trial was double-blind, someone who is committed to achieving a good result for the treatment will get more favorable data than some other experimenter with no involvement.
And that’s on top of the trivial fact that someone with an interest in getting a successful trial is more likely to use a directionally-slanted stopping rule if they have doubts about the efficacy than if they are confident it will work, which is not explicitly relevant in Eliezer’s example.
You can claim that it should have the same likelihood either way, but you have to put the discrepancy somewhere. Knowing the choice of stopping rule is evidence about the experimenter’s state of knowledge about the efficacy. You can say that it should be treated as a separate piece of evidence, or that knowing about the stopping rule should change your prior, but if you don’t bring it in somewhere, you’re ignoring critical information.
Read the Tiffany Aching ones. They’re not just for children, but especially read them if you have or ever expect to have children. These are the stories on which baby rationalists ought to be raised.
It’s something Eliezer talks about in some posts; I associate it mainly with The Twelve Virtues and this:
Some people, I suspect, may object that curiosity is an emotion and is therefore “not rational”. I label an emotion as “not rational” if it rests on mistaken beliefs, or rather, on irrational epistemic conduct: “If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.”
in-GovCo
un-GovCo, I believe?
Historically, this didn’t work out well. You know, back when the snake oil salesmen were literal and selling real snake oil, cocaine, and various low-dose toxic extracts. (I believe similar things happen in China today, but it’s more slanted toward traditional medicine and thus less likely to be toxic.)
They most likely cannot onboard volunteers quickly enough to be useful at this point; Thursday was the last day for volunteer signups, I believe.
Read that at the time and again now. Doesn’t help. Setting threshold less than perfect still not possible; perfection would itself be insufficient. I recognize that this is a problem but it is an intractable one and looks to remain so for the foreseeable future.
In this case, “description of how my experience will be different in the future if I have or do not have qualia” covers it. There are probably cases where that’s too simplistic.