Open Thread: December 2011
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.
I’m not sure if it’s “worth saying,” but a Google search for “Secret Bayesian Man” turned up nothing, so I wrote this:
I deeply apologize.
Odds (20:1) are you will live to regret having written something so ridiculous.
My 9th Grade Teen Titans Fanfic is still on the internet.
Game on.
Still not regretting, what do I win?
That was a nice touch.
Karaoke
I love the idea of the open thread. So many things I would like to discuss, but that I don’t feel confident to actually make discussion posts on. Here’s one:
On Accepting Compliments
Something I learned, and taught to all my students, is that when you are performing certain things (fire, hoops, bellydancing, whatever), people are going to be impressed, and are going to compliment you. Even though YOU know that you are nowhere near as good as Person X, or YOU know that you didn’t have a good show, you ALWAYS accept their compliment. Doing otherwise is actually an insult to the person who just made an effort to express their appreciation to you. Anyway, you see new performers NOT following this advice all the time. And I know why. It’s HARD to accept compliments, especially when you don’t feel deserving of them. But you have to learn to do it anyway, because it’s the right thing to do.
Same idea, said better by somebody else
I think this is applicable to all areas of life not just performing. In fact, doing some googling, I found a Life Hack on the subject. Some excerpts:
This link actually has sample dialogue, if that helps, but it is bellydance-centric.
Thank you; I’ve updated significantly based on this.
A commonly claimed Great Filter issue is that Earth-like planets need a large moon to stabilize their axial tilt. However, recent research seems to indicate that this view is mistaken. In general, this seems to be part of a general pattern where locations that are conducive to life seem more and more common (for another recent example, see here) (although I may have some confirmation bias here?). Do these results force us to conclude that a substantial part of the Great Filter is in front of us?
No. The mainstream expectation has pretty much always been that locations conducive to life would be reasonably common; the results of the last couple of decades don’t overturn the expectation, they reinforce it with hard data. The controversy has always been on the biological side: whether going from the proverbial warm little pond to a technological civilization is probable (in which case much of the Great Filter must be in front of us) or improbable (in which case we can’t say anything about what’s in front of us one way or the other). For what it’s worth, I think the evidence is decisively in favor of the latter view.
Is it worth posting on LW a series of videos from the AI class that make up a gentle introduction to the basics of game theory, for people who aren’t in the class and aren’t very good at math?
This is one of the videos from the series. There are also a few easy practical exercises and their solutions.
John Cheese from Cracked.com pulls out another few loops of bloodsoaked intestine and slaps them on the page as a ridiculously popular Internet humour piece: 9 YouTube Videos That Prove Anyone Can Get Sober. I hate watching video, and I sat down and watched the lot. Horrifying and compelling. I’ve been spending this afternoon reading the original thread. It’s really bludgeoning home to me just how much we’re robots made of meat and what a hard time the mind has trying to steer the elephant. Fighting akrasia is one thing—how do you fight an addiction with the power of your mind?
You set up an environment for yourself where the patterns of that addiction are less likely to drive you to do things you don’t endorse.
That’s obvious when the cravings aren’t eating your brain, but that thread is all about how hard that can be in practice.
It’s not all that obvious, even during the lucid periods. At least, I’ve never found it so… it’s very easy to fall into the trap of “I don’t need to fix the roof; it’s sunny out” and fail to take advantage of the lucid periods to set up systems that I can rely on in the crazy periods.
But the only way I’ve ever found that works in the long run is to devote effort during lucid periods to setting up the environment, and then hope that environment gets me through the crazy periods.
Natch, that depends on having lucid periods, and on something in the system being able to tell the difference (if not my own brain, then something else I will either trust during the crazy periods, or that can enforce compliance without my proximal cooperation).
If I can’t trust my own brain, and I also can’t trust anything else more than my own brain (in the short term), then I’m fucked.
I was musing on the old joke about anti-Occamian priors or anti-induction: ‘why are they sure it’s a good idea? Well, it’s never worked before.’ Obviously this is a bad idea for our kind of universe, but what kind of universe does it work in?
Well, in what sort of universe would every failure of X to appear in that time interval make X that much more likely? It sounds vaguely like the hope function, but actually it sounds more like an urn of balls where you sample without replacement: every ball you pull (and discard) without finding X makes you a little more confident that the next one will be X. Well, what kind of universe sees its possibilities shrinking every time?
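A minimal sketch of that urn picture in Python (the numbers are made up; the point is only that each failed draw pushes the probability up):

```python
from fractions import Fraction

# Toy urn world (made-up numbers): N balls, k of them are "X", sampled without
# replacement. Every miss shrinks the remaining possibilities, so P(X next) keeps
# rising; exactly the situation where "it's never worked before" counts as evidence
# that it will work now.
N, k = 100, 5

for misses in range(0, 96, 19):
    # After `misses` non-X balls have been discarded, N - misses balls remain
    # and all k X-balls are still in the urn.
    p_next = Fraction(k, N - misses)
    print(f"{misses:2d} misses -> P(next is X) = {float(p_next):.3f}")
```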
For some reason, entropy came to mind. Our universe moves from low to high entropy, and we use induction. If a universe moved in the opposite direction, from high to low entropy, would its minds use anti-induction? (Minds seem like they’d be possible, if odd; our minds require local lowering of entropy to operate in an environment of increasing entropy, so why not anti-minds which require local raising of entropy to operate in an environment of decreasing entropy—somewhat analogous to the way erasing bits necessarily dissipates energy, which is the cost reversible computers are designed to avoid.)
I have no idea if this makes any sense. (To go back to the urn model, I was thinking of it as sort of a cellular-automaton mental model where every turn the plane shrinks: if you are predicting a glider as opposed to a huge Turing machine, then as every turn passes and the plane shrinks, the less you would expect to see the Turing machine survive and the more you would expect to see a glider show up. Or if we were messing with geometry, it’d be as if we were given a heap of polygons with thousands of sides where every second a side was removed, and predicted a triangle—as the seconds pass, we don’t see any triangles, but Real Soon Now… Or to put it another way, as entropy decreases, necessarily fewer and fewer arrangements show up; particular patterns get jettisoned as entropy shrinks, and so having observed a particular pattern, it’s unlikely to sneak back in: if the whole universe freezes into one giant simple pattern, the anti-inductionist mind would be quite right to have expected all but one of its observations not to repeat. Unlike our universe, where there seem to be ever more arrangements as things settle into thermal noise: if an arrangement shows up, we’ll be seeing a lot of it around. Hence, we start with simple low-entropy predictions and decrease confidence.)
Boxo suggested that anti-induction might be formalizable as the opposite of Solomonoff induction, but I couldn’t see how that’d work: if it simply picks the opposite of a maximizing AIXI and minimizes its score, then it’s the same thing but with an inverse utility function.
The other thing was putting a different probability distribution over programs, one that increases with length. But while you are forbidden uniform distributions over the infinitely many integers, and you can have non-uniform decreasing distributions (like the speed prior or random exponentials), it’s not at all obvious what a non-uniform increasing distribution looks like—apparently it doesn’t work to say ‘infinite-length programs have p=0.5, then infinity-1 have p=0.25, then infinity-2 have p=0.125… then programs of length 1/0 have p=0’.
How can they possibly know/think that ‘it’ has never worked before? That assumes reliability of memory/data storage devices.
I don’t see how these anti-Occamians can ever conclude that data storage is reliable.
If they believe data storage is reliable, they can infer whether or not data storage worked in the past. If it worked, then data storage is probably not reliable now. If it didn’t work then it didn’t record correct information about the past. In neither case is the data storage reliable.
(An increasing probability distribution over the natural numbers is impossible. The sequence (P(1), P(2),...) would have to 1) be increasing 2) contain a nonzero element 3) sum to 1, which is impossible.)
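A sketch of why the three conditions can’t hold together (just the standard argument, spelled out):

```latex
% If (P(n)) is increasing and some P(k) = c > 0, then P(n) >= c for every n >= k, so
\sum_{n=1}^{\infty} P(n) \;\ge\; \sum_{n=k}^{\infty} P(n) \;\ge\; \sum_{n=k}^{\infty} c \;=\; \infty,
% which contradicts the requirement that the probabilities sum to 1.
```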
Less Wrong is a sect
Funny comic from bouletcorp: Physics of a Pixelated World
Cf. Eric Flint: I’ve always found the idea of bringing technology back in time very interesting. Specifically, I’ve always wondered what technology I could independently invent and how early I could invent it. Of course, the thought experiment requires me to handwave away lots of concerns (like speaking the local language, not being killed as a heretic/outsider, and finding a patron).
Now, I’m not a scientist, but I think I could invent a steam engine if there was decent metallurgy already. Steam engine: Fill large enclosed container with water, heat water to boiling, steam goes through a tube to turn a crank, voila—useful work. So, 1000s in Europe, maybe?
I’d like to think that I could inspire someone like Descartes to invent calculus. But there’s no way I could invent it on my own.
Anyone else ever had similar thoughts?
Of course; it’s a common thought-experiment among geeks, ever since A Connecticut Yankee. There’s even a shirt stuffed with technical info in case one ever goes back in time.
(FWIW, I think you’d do better with conceptual stuff like Descartes and gravity, which you can explain to the local savant and work on hammering out the details together; metallurgy is hard, and it’s not like there weren’t steam engines before the industrial revolution—they were just uselessly weak and expensive. Low cost of labor means machines are too expensive to be worth bothering with.)
You’re probably right, but other than proving the Earth is round (which is not likely to need proving unless I go far back), there’s not a lot of useful things I can demonstrate to the savant. And telling the savant about germ theory or suchlike without being able to demonstrate it seems pretty useless to me.
I’ve always wondered how much ‘implicit’ knowledge we can take for granted. For example, the basic idea of randomized trials, while it has early forebears in Arabic stuff in the 1000s or whenever, is easy to explain and justify for any LWer while still being highly novel. As well, germ theory is tied to microscopic life and non-spontaneous generation (would one remember Pasteur’s sealed jar experiments or be able to reinvent them?) I was just reading a book on colonial Americans in London when I came across a mention of the discoverer of carbon dioxide; I reflected that I would have been easily able to demonstrate it just with a sealed jar and a flame and a mouse and a plant, but am I atypical in that regard? Would other people, even given years or decades pondering, be able to answer the question ‘how the deuce do I show these past people that air isn’t “air” but oxygen and carbon dioxide?’
I guess you could answer this question just by surveying people on how they would demonstrate such classic simple results.
No way, unless perhaps you’re an amateur craftsman with a dazzling variety of practical skills and an extraordinary talent for improvisation. And even if you managed to cobble together something that works, you likely wouldn’t be able to put it to any profitable use in the given economic circumstances.
When you carefully consider the implications of those concerns, you’ll find that the “I” quickly loses its content when projected back in time to an earlier era. In short, it’s a question calling for a pseudo-proposition in answer.
I’m usually a lurker here. I generally spend a little too much time on this site. I’m making a personal resolution to leave the site alone for the rest of the day whenever I read an article here and find that I have nothing to say about it. Under this policy, I expect that I will be spending less time here, and also that I will be contributing more.
For those interested in the Big Five: “The Big-Five Trait Taxonomy: History, Measurement, and Theoretical Perspectives”.
Is there any easy way to report spam comments?
There used to be a “Report” button. I don’t see it anymore, but it’s possible that it just doesn’t appear for me because I’m a mod. If you find spam comments, you’re welcome to PM me a link so I can ban it; I can’t speak for other mods.
There isn’t normally a report button. There is one when I’m viewing replies to my own comments via my inbox, but that seems to be the only time it’s available.
I’m curious what happened to SarahC. I enjoyed her presence, but I hadn’t seen her recently and I notice she’s deleted her account (http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/3f00). Anyone know what happened?
SarahC is alive and well; she deleted her account for personal reasons.
I am glad she’s well. Was this an “evaporative cooling” sort of event (e.g. personal conflicts with other LWers) or unrelated personal reasons? (If you/she don’t/doesn’t mind my asking)
Sarah says:
I’m not sure of the extent to which Sarah wants her reasons publicized; I will ask her next time we talk and point her to this thread.
I’ve been wondering for several weeks now how to pronounce 3^^^3.
Three to the to the to the three is how I do it.
I generally just look at it as a picture, so as to not waste time sub-vocalizing.
I generally say “triple-hat” to myself when I read it.
3↑↑↑3. The use of ^ instead is a limitation of ASCII.
Knuth called it the “triple-arrow” operator. I don’t see any conjugations necessary, so “three triple-arrow three”.
“Three pentated by three” also works.
It might confuse newbies that regular meetups don’t appear on the meetup map (only irregular ones), and this can be easily fixed. Is there any reason to leave it as is?
I’m having a wall-banging philosophical disputation with someone over the word “scientism”, but mostly over qualia and p-zombies, here. I ask your help on trying to work out if any commonality is resolvable in between the spittle-flecked screaming at each other. (I would suggest diving in only if you have familiarity with the participants—assume we’re all smart, particularly the guy who’s wrong.)
I feel like I ought to admire your tenacity on that thread.
I don’t, actually, but I feel like I ought to.
Anyway… no, I haven’t a clue how you might go about resolving whatever commonality might exist there. Then again, I’ve never been able to successfully make contact across the qualia gulf, even with people whose use of the English language, and willingness to stick to a single topic of discussion, is better aligned with mine than yours and Lev’s seem to be.
Heh. Am I making no sense either? I’m sticking with it because I’ve known Lev for decades and I’m a huge fan of his and I have little to no idea what the fuck he’s on about with this crazy moon language. The p-zombie argument proves—proves—magic, rather than demonstrating that philosophers are easily convinced of rubbish? What?
(I have little doubt he’s thinking the same of me.)
The thread is still going, by the way. Twenty days later. None of us know when to give up.
Well, you’re making sense to me, but that’s perhaps due to the fact that I basically would be saying the same things if I somehow found myself in that conversation. (Which could easily happen… hell, I’ve gotten into that conversation on LW.)
I think you would all benefit from drop-kicking about half the terms you’re using, and unpacking them instead… it seems moderately clear that you don’t agree on what a p-zombie is, for example. But I would be surprised if he agreed to that.
That said, I don’t think he’d agree with your summary of his position.
Does he always have such eccentric syntax?
No, he doesn’t. He’s spent most of his life in politics and knows how to speak with fine-honed bluntness. That’s why people thought his first comment was parody.
I’ve identified the feeling: that sinking realisation that someone you respected is a dualist.
There is an unfortunate equivocation in the word theory (compare “Theory of Evolution” to “Just War Theory”). Popper says that a theory can only be called scientific if it is falsifiable. Using that terminology, Freudian theory is pseudoscience, not a scientific theory. But many things that the vernacular calls theories are not falsifiable. (What would it mean to falsify utilitarian theory?)
Does that mean that we can’t talk about moral theories? What word should we use instead? Because it seems like talking about moral theories is doing something productive.
For some context, I’m starting this post to separate off this conversation from a distinct conversation I’m having here.
And just wait until you get to “critical theory”. I fear the word “theory” in English is indeed stretched in a continuous fog from the hardest of physics to the foggiest of spurious postmodernist notions, with little in the way of joins to carve it at. Thus, cross-domain equivocation will be with us for a while yet.
And? Invoking critical theory doesn’t scare me off. I’m as post-modern as you are likely to meet here at LW.
I agree that the word “theory” needs an adjective or it is underspecified. Scientific theories are different from moral theories. Let me repeat: if I can’t talk about the “theory” of utilitarianism, what word should I use instead to capture the concept?
I think we’re furiously agreeing here. I have no problem with you using the word “theory” there, but I do think some theories have more explanatory power (which I think of as “better”) than others, wherever we are on the spectrum I posit from physics to fog. My interests are largely at the foggy end and how to come up with theories with explanatory power at the foggy end is something I’m presently wrestling with.
I personally would prefer to use the word “theory” to mean “a scientific theory that is, by definition, falsifiable”. But it’s not a strong preference; I merely think that it helps reduce confusion. As long as we make sure to define what we mean by the word ahead of time, we can use the word “theory” in the vernacular sense as well.
Regarding moral theories, I have to admit that my understanding of them is somewhat shaky. Still, if moral theories are completely unfalsifiable, then how do we compare them to discover which is better? And if we can’t determine which moral theories are better than others, what’s the point in talking about them at all?
I said earlier that Utilitarianism is more like an algorithm than like a scientific theory; the reason I said that is because Utilitarianism doesn’t tell you how to obtain the utility function. However, we can still probably say that, given a utility function, Utilitarianism is better than something like Divine Command—or can we? If we can, then we are implicitly looking at the results of the application of both of these theories throughout history, and evaluating them according to some criteria, which looks a lot like falsifiability. If we cannot, then what are those moral theories for?
It should be noted that Utilitarianism(Ethical Theory) states that the outputs of Utilitarianism(algorithm) constitute morality.
Oh… so does Utilitarianism(Ethical Theory) actually prescribe a specific utility function? If so, how is the function derived? As I said, my understanding of moral theories is a bit shaky, sorry about that.
When Utilitarianism was proposed, Mill/Bentham identified it as basically “pleasure good / pain bad”. Since then, Utilitarianism has pretty much become a family of theories, largely differentiated by their conceptions of the good.
One common factor of ethical theories called “Utilitarianism” is that they tend to be agent-neutral; thus, one would not talk about “an agent’s utility function”, but “overall net utility” (a dubious concept).
“Consequentialism” only slightly more generally refers to a family of ethical theories that consider the consequences of actions to be the only consideration for morality.
Thanks, that clears things up. But, as you said, “overall net utility” is kind of a dubious concept. I suspect that no one has figured out a way yet to compute this utility function in a semi-objective way… is that right?
So would I. But it’s just an ambiguous word in English that means different things in different places. As I take it into the extremely foggy areas that also use the word “theory”, I’m going for something like “has explanatory power”.
Just a quick definition here: When people say moral theory, they mean the procedure(s) they use to generate their terminal values (i.e. the ends you are trying to achieve). Instrumental values (i.e. how to achieve your goals) are much less troublesome.
I’m not sure that the consensus here is that all moral theories are unfalsifiable (although I believe that is a fact about moral theories). If theories are unfalsifiable, then comparison from some “objective” position is conceptually problematic (which I expect is why politics is the mind-killer).
We still make decisions, and I think we are right to say that the decisions are “moral decisions” because they have moral consequences. Thus, one reason to discuss moral theories is to determine [as a descriptive matter] what morality one follows, in some attempt to be internally consistent.
Understood, thanks.
Let’s go with what you believe, then, and if the consensus wants to disagree, they can chime in :-)
Are you saying that moral theories are descriptive, and not prescriptive ? In this case, discussing moral theories is similar to discussing human psychology, or cognitive science, or possibly sociology. That makes sense to me, though I think that most people would disagree. But, again, if this is what you believe as well, then we are in agreement, and the consensus can chime in if it feels like arguing.
New experiment supports evopsych idea that some out-group prejudice is related to disease risk (though I wish it had been controlled with a state of generalized non-disease stress to see whether it’s just stress that increases prejudice).
I’m interested in conducting a simple, informal study requiring a moderate number of responses to be meaningful. Specifically, I want to look at some aspects of the “wisdom of the crowd”. I’m new here, so I want to ask first: is LessWrong Discussion a good place to put things like this that ask people to take a quick survey in the name of satisfying my curiosity? Are there other websites where this is appropriate?
I’m sorry that this comment was missed when you wrote it. But to answer your question, I say go for it! Submit it as a Discussion article and see how it goes.
Thanks for the reply—I’ll be posting it soon. Suggestions for other places that like to participate in things like this would also be appreciated.
If we believe that our conscious experience is a computation, and we hold a universal prior which basically says that the multiverse consists of all Turing machines, being run in such a fashion that the more complex ones are less likely, it seems to be very suggestive for anthropics.
I visualize an array of all Turing machines, either with the simpler ones duplicated (which requires an infinite number of every finite-length machine, since there are twice as many of each machine of length n as of each machine of length n+1), or the more complex ones getting “thinner,” with their outputs stretching out into the future. Next, suppose we know we’ve observed a particular sequence of 1s and 0s. This is where I break with Solomonoff Induction—I don’t assume we’ve observed a prefix. Instead, assign each occurrence of the sequence anywhere in the output of any machine a probability inversely proportional to the complexity of the machine it’s on (or assign them all equal probabilities, if you’re imagining duplicated machines), normalized so that you get total probability one, of course. Then assign a sequence a probability of coming next equal to the sum of the probabilities of all machines where that sequence comes next.
Of course, any sequence is going to occur an infinite number of times, so each occurrence has zero probability. So what you actually have to do is truncate all the computations at some time step T, do the procedure from the previous paragraph, and hope the limit as T goes to infinity exists. It would be wonderful if you could prove it did for all sequences.
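A toy sketch of that truncate-and-weight procedure, using a tiny made-up “program” family in place of all Turing machines (the repeat-the-string rule, the length cap, and every name here are my own illustrative assumptions, not the construction above):

```python
import math
from itertools import product

# Toy stand-in for "all Turing machines": every binary string p of length 1..MAX_LEN
# is a "program" whose output is p repeated forever, weighted by 2**(-len(p)) so that
# longer ("more complex") programs count for less. Purely illustrative bookkeeping.
MAX_LEN = 6   # hypothetical cap standing in for the full enumeration
T = 60        # truncate every output at T steps, as described above

def output(program: str, steps: int) -> str:
    """Output of the toy program: the program string repeated, cut off at `steps`."""
    reps = math.ceil(steps / len(program))
    return (program * reps)[:steps]

def prob_next(observed: str) -> dict:
    """P(next symbol | observed), summing machine weights over every occurrence of
    `observed` anywhere in a truncated output (not just as a prefix)."""
    totals = {"0": 0.0, "1": 0.0}
    for length in range(1, MAX_LEN + 1):
        weight = 2.0 ** (-length)
        for bits in product("01", repeat=length):
            out = output("".join(bits), T)
            for i in range(len(out) - len(observed)):
                if out[i:i + len(observed)] == observed:
                    totals[out[i + len(observed)]] += weight
    z = sum(totals.values())
    return {sym: w / z for sym, w in totals.items()} if z else totals

print(prob_next("0101"))  # the toy prior's bet on what follows the observed pattern
```

In the real construction the enumeration ranges over actual Turing machines and the interesting question is whether the T → ∞ limit exists; the toy only shows the conditioning step of weighting every occurrence and normalizing over continuations.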
To convert this into an anthropic principle, I assume that a given conscious experience corresponds to some output sequence, or at least that things behave as if this were the case. Then you can treat the fact that you exist as an observation of a certain sequence (or of one of a certain class of sequences).
So what sort of anthropic principle does this lead to? Well, if we’re talking about different possible physical universes, then after you weight them for simplicity, you weight them for how dense their output is in conscious observer-moments (that is, sequences corresponding to conscious experiences). (This is assuming you don’t have more specific information. If you have red hair, and you know how many conscious experiences of having red hair a universe produces, you can weight by the density of that). So in the Presumptuous Philosopher case, where we have two physical theories differing in the size of the universe by an enormous factor, and agreeing on everything else, including the population density of conscious observers, anthropics tells us nothing. (I’m assuming that all stuff requires an equal amount of computation, or at least that computational intensity and consciousness are not too correlated. There may be room for refinement here). On the other hand, if we’re deciding between two universes of equal simplicity and equal size, but different numbers of conscious observer-moments, we should weight them according to the number of conscious observer-moments as we would in SIA. In cases where someone is flipping coins, creating people, and putting them in rooms, if we regard there as being two equally simple universes, one where the coin lands heads and one where the coin lands tails, then this principle looks like SIA.
The main downside I can see to this framework is that it seems to predict that, if there are any repeating universes we could be in, we should be almost certain we’re in one, and not in one that will experience heat death. This is a downside because, last I heard, our universe looks like it’s headed for heat death. Maybe earlier computation steps should be weighted more heavily? This could also guarantee the existence of the limit I discussed above, if it’s not already guaranteed.
Transhumanism is important—it’s the only way we can live up to the possibilities offered by our toys.
When _ozymandias posted zir introduction post a few days ago, I went off and binged on blogs from the trans/men’s rights/feminist spectrum. I found them absolutely fascinating. I’ve always had lots of sympathy for transgendered people in particular, and care a lot about all those issues. I don’t know what I think of making up new pronouns, and I get a bit offput by trying to remember the non-offensive terms for everything. For example, I’m sure that LGBT as a term offends people, and I agree that lumping the T with the LGB is a bit dubious, but I don’t know any other equivalent term that everyone will understand. I’m going to keep using it.
However, I don’t currently know any LGBT people who I can talk to about these things. In particular, the whole LGBT and feminist and so on community seems to be prone to taking unnecessary offense, and believing in subjectivism and silly things like that.
So I’d really like to talk with some LWers who have experience with these things. I’ve got questions that I think would be better answered by an IM conversation than by just reading blogs.
If anyone wants to have an IM conversation about this, please message me. I’d be very grateful.
I’m writing a discussion post titled “Another Way of Thinking About Mind-States”. I’m not sure at all whether my draft explains what I’m talking about clearly enough for anyone reading it to actually understand it, so I’d appreciate a beta volunteer to take a look at it and give me some feedback. If you’d like to beta, just reply here or send me a PM. Thanks!
For reasons I won’t spoil ahead of time, I’m reasonably certain most LWers will really enjoy this song.
(I also expect people will enjoy this one, this one, this one, and probably most others by this songwriter, but that’s less a LW-specific thing and more a general awesomeness thing.)
Really? It seems to be a guy humiliating himself due to lack of social skills then burying his head in denial by fantasizing about science. It’s degrading to those who would like to attribute their love of science to more than an inability to succeed in the conventional social world.
Ok, it gets points for the ending where he is an evil cyborg overlord rather than a hero. ;)
Apologies for taking so long to respond to this. I was trying to figure out what inferential distance was separating us that you would have such a divergent reaction. Here is my best attempt at explanation.
My take on the song is that its tone is ironic sincerity. That is, I take it to be self-deprecating but basically sympathetic. It’s sort of like an “It gets better” video for nerds (though it predates the actual It Gets Better campaign by several years), except that the specific scenario isn’t meant to be taken literally. I also don’t see anything in the lyrics that suggests that social ineptitude is the only reason the protagonist likes science.
Do these open threads serve as places to make comments or ask small questions? Personally, I was reading Luke’s new Q and A and I was thinking that I would like to have a thread full of people’s questions. If the purpose of this thread is for comments and not questions, should we make a new recurring monthly post?
Random Thought Driving Home:
I hate when all my programmed radio stations are on commercial. Why does this always happen?
Say a radio station spends 25% of its air time playing commercials. This sounds pretty conservative. It would mean that for every 45 minutes of music, it plays 15 minutes of commercials.
I have 6 pre-programmed stations. That means that if the stations’ commercial breaks were uncorrelated with each other, they would ALL have commercials on only 0.25^6 ≈ 0.00024, or about 0.024%, of the time.
Say I spend an hour a day driving. Then only about 0.00024 hours, or 0.88 seconds, of that time should have all the pre-programmed stations on commercials.
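A quick sanity check of that arithmetic in Python (assuming the six stations schedule commercials independently and uniformly at random, which is exactly the assumption being questioned below):

```python
commercial_fraction = 0.25   # assumed share of air time given to commercials
stations = 6
seconds_per_hour = 3600

p_all = commercial_fraction ** stations
print(f"P(all {stations} stations on commercial) = {p_all:.6f}")                     # ~0.000244
print(f"Expected all-commercial time per hour   = {p_all * seconds_per_hour:.2f} s")  # ~0.88 s
```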
I am pretty sure that more than a second per hour has all-commercials, leaving me with two possible interpretations:
1) I notice the all-commercials times a lot, and so over-estimate their prominence, OR
2) Radio stations are evil, and they all play commercials at the same time (say every o’clock and -thirty) on purpose, so that the listener HAS to listen to commercials.
....Evil radio stations.
Another option there might be that certain commercial timings emerge more or less naturally from the constraints on the problem. If radio stations schedule time blocks by hours or some 1/n division of hours, for example, there’s going to be a break between segments at every N:00 -- a natural place to put commercials.
This would be a Schelling point, except that I doubt most radio stations think of commercial timing relative to other stations in terms of game-theoretic advantage.
Isn’t it a matter of news being played at clear, easily recognizable times like :00/:30? Whoever wants to hear them fully WILL have to listen to at least a few seconds of commercials; it seems like a common pattern to just switch the radio on some minutes beforehand so as not to spend extra attention on the exact time for switching on. For most people, some commercials before the news are an absolutely acceptable tradeoff.
I am writing a discussion post called “On “Friendly” Immortality” and am having some difficulties getting my thoughts fully verbalized. I would appreciate a volunteer that’s willing to read it before I post and provide feedback. If you are willing to be my “beta” either reply to this comment, or send me a pm. Thank you!
The title sounds interesting, I’d be willing to provide feedback.
Thank you both! Will work on it soon, and send it to you.
Beta me.
I would be willing to beta.
Beta me.
I’ll beta.