For what it’s worth, the argument I’d heard—not that I agree with it, to be clear—was that visitors/patrons weren’t the issue: the law was designed to essentially extend safe-work-environment laws to bars. Thus, it was the employees who were the at-risk party.
Best I can tell, Science is just a particularly strong form (or subset) of Bayesian evidence. Since it attempts (when done well) to control for many potentially confounding factors and isolate true likelihoods, we can have more confidence in the strength of the evidence thus obtained than we could from general observations.
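A minimal sketch of the framing I have in mind, in standard Bayesian notation (H a hypothesis, E some evidence; this is my own illustration, not a quote from anyone):

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
\]

A well-run experiment is designed so that the likelihood ratio \(P(E \mid H)/P(E \mid \neg H)\) lands far from 1, since controlling confounds rules out alternative explanations for E; that is the sense in which scientific evidence is “particularly strong” Bayesian evidence, whereas casual observation typically leaves the ratio close to 1.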
Agreed, and a lot of modern fields, including many of the natural sciences and social sciences, derive from philosophers’ framework-establishing questions. The catch is that we then credit the fields thus derived, rather than philosophy, with solving the original questions.
Philosophy doesn’t really solve questions in itself; instead, it allows others to solve them.
I wonder if “How does neurons firing cause us to have a subjective experience?” might be unintentionally begging Mitchell_Porter’s question. Best I can tell, neurons firing is having a subjective experience, as you more or less say right afterwards.
Even if we prefer to frame the reference class that way, we can instead note that anybody who predicted that things would remain the way they are (in any of the above categories) would have been wrong. Over the last century, people making that prediction have been proven wrong ever more quickly. As Eliezer put it, “beliefs that the future will be just like the past” have a zero success rate.
Perhaps the inventions listed above suggest that it’s unwise to assign 0% chance to anything on the basis of present nonexistence, even if you could construct a reference class that has that success rate.
Either way, people who predicted that human life would be lengthened considerably, that humanity would fundamentally change in structure, or that some people would interact with beings that appear nigh-omnipotent have all been right with some non-zero success rate, and there’s no particular reason to reject those data.
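To put a number on why assigning exactly 0% seems unwise (an illustrative calculation of my own, using Laplace’s rule of succession): after observing n trials with zero successes, the rule estimates

\[
P(\text{success on the next trial}) \;=\; \frac{0 + 1}{n + 2} \;=\; \frac{1}{n+2} \;>\; 0,
\]

so even a reference class with a perfect record of failure only pushes the estimate toward zero; it never licenses assigning exactly zero.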
That’s not uncommon. Villains act, heroes react.
I interpreted Eliezer as saying that this was a cause of the stories’ failure or unsatisfactory nature, and as attributing it to our desire to feel that decisions come from within even when they are driven by external forces.
I’m perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I’d modify the classes slightly, however:
Inventions that extend human life considerably: Penicillin, if nothing else. Vaccinations. Clean-room surgery.
Inventions that materially changed the fundamental condition of humanity: Agriculture. Factories/mass production. Computers.
Interactions with beings that are so relatively powerful that they appear omnipotent: Many colonists in the Americas were seen this way. Similarly with the cargo cults in the Pacific islands.
The point is, each of these reference classes, given a small tweak, has experienced infrequent but nonzero successes—and that over the course of all of human history! Once we update the “all of human history” reference class/prior to account for the last century—in which technology has developed faster than it probably did in the entire previous millennium—the posterior ends up looking much more promising.
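A toy version of that update, with purely illustrative numbers of my own choosing: treat the success rate p of the tweaked reference class as Beta-distributed, so that

\[
p \sim \mathrm{Beta}(\alpha, \beta)
\quad\Longrightarrow\quad
p \mid (k \text{ successes in } n \text{ cases}) \sim \mathrm{Beta}(\alpha + k,\; \beta + n - k),
\]

with posterior mean \((\alpha + k)/(\alpha + \beta + n)\). A pessimistic “all of human history” prior of \(\mathrm{Beta}(1, 99)\) (mean 0.01), conditioned on even a handful of last-century successes such as \(k = 3\) out of \(n = 10\) comparable cases, yields a posterior mean of \(4/110 \approx 0.036\), a several-fold shift in the promising direction.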
Agreed. Part of the reason I love reading Asimov is that he focuses so much on the ideas he’s presenting, without much attempt to invest the reader emotionally in the characters. I find the latter impairs my ability to synthesize useful general truths from fiction (especially short stories, my favorite form of Asimov).
I defer to Wittgenstein: the limits of our language are the limits of the world. We can literally ask the questions above, but I cannot find meaning in them. Blueness, computational states, time, and aboutness do not seem to me to have any implementation in the world beyond the ones you reject as inadequate, and I simply don’t see how we can speak meaningfully (that is, in a way that allows justification or pursues truth) about things outside the observable universe.
I believe this is from a TV special; I’m having trouble determining the relevance as well.
One possibility: the extended description of the story, rather than a simple statement of fact or belief, constitutes a warning about the power of contextual imagery in activating availability heuristics.
You repeat #10 as #11; the question as cited by Eliezer is as follows:
If you got hit by a meteorite, what would be the impact on FAI research? Would other people be able to pick it up from there?
While I don’t think you need to read it, per se, I have found sci-fi to be of remarkable use in preparing me for exactly the kind of mind-changing upon which Less Wrong thrives. The Asimov short stories cited above are good examples.
I also continue to cite Asimov’s Foundation trilogy (there are more books after the trilogy, but he openly said that he wrote the later ones purely because his publisher requested them) as the works of fiction most influential in pushing me toward my current career.
Asimov thought it was his best story, too (or at least his favorite). Can’t say I disagree.
I strongly second Snow Crash. I enjoyed it thoroughly.
In what language or symbolic system would you do so? The Pioneer plaque and Voyager records both made an attempt in that direction, but I’m sure there’s a better way.
In one of my classes in college, we were asked to try to decipher the supposedly universal language of the Pioneer plaque, which should have been relatively easy insofar as we shared a species (and thus a neural architecture) with its creators. We got some of it, though not all, which is apparently better than many of the NASA scientists on the project managed!
I know they get overused, but Gödel’s incompleteness theorems set important limits on what can and cannot be proven true or false. I don’t think they apply to P vs NP, but I just note that not every statement is provable or refutable, even in principle.
My understanding is that such a story relies on trying to define the area of a point when only areas of regions are well-defined; the probability of the point case is just the limit of the probability of the region case, in which case there is technically no zero probability involved.
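More concretely, my own gloss on the standard resolution: for a continuous random variable X with density f, only regions are assigned probabilities directly, and the single-point case appears only as a limit,

\[
P(X = x) \;=\; \lim_{\varepsilon \to 0} P(x - \varepsilon < X < x + \varepsilon)
\;=\; \lim_{\varepsilon \to 0} \int_{x-\varepsilon}^{x+\varepsilon} f(t)\, dt \;=\; 0,
\]

so none of the elementary assignments themselves is zero; the zero only shows up as the limit of shrinking regions.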
I like it. Sure would beat the hell out of a lot of the advice I’ve heard, and if nothing else it would be good training in changing our minds and in aggregating evidence appropriately.
Now that would be a great extension of the LW community—a specific forum for people who want to make rationalist life decisions like that, to develop a more personal interaction and decrease subjective social costs.
Also, more than votes are gained by demonizing smokers—there are also the smokers’ tax dollars.