yes, that’s what I meant; thank you.
Sorry, I guess it wasn’t clear. I was contrasting two naive utility functions: a flat one that adds up the utilons of all people versus one that counts only the utilons of stock brokers. I’m not asserting that one or the other is “right”. Both utilities would have some additional term giving utility for preserving resources, but I’m not being concrete about how that’s factored in. [I’m also not addressing in any depth the complications that a full utilitarian calculation would need, like estimated discounted future utilons, etc.] Did I clear it up or make it worse?
I don’t know about documentation, but you can start looking here.
The “canceled out” part depends on whether you’re interested in the utility of stockholders and the reduced resource consumption of the manufacturing process, or in the utility of the general population, which might have to consume less of the product than it otherwise would (because of higher prices) or, more generally, have less capital left to buy other things it needs/wants. Monopolies with regulated price structures sometimes work, I guess, though it’s complicated.
One possibility is computer games; e.g., I’ve certainly lost a good chunk of hours to the game Diablo. Modern things like Farmville seem especially pernicious. [This is not to say that all gaming is bad, etc.]
I suggest reading a translation.
Why are we thinking about this again?
It seems to me these are obvious targets for regulation. I’d guess the OP is worried that we’ve overlooked something. The game theory of it might make it difficult to implement in practice: e.g., if one country bans casinos, that just makes casinos in neighboring countries more profitable. … but that’s what treaties are for.
Your question makes me think of what economists call negative externalities. Wikipedia has a list of them.
I have at times observed different color temperatures in my left and right eyes, and observed that these can be changed after wearing red/blue glasses; by swapping which lens covered which eye, I could correct both back to a more balanced condition.
I use a subset of the extensions you mentioned. I also use this bookmarklet to hide nested comments in long threaded lesswrong pages like the open thread; then I open only the interesting threads selectively to limit distractions.
I think it was clear and good.
A new study in mice (popular article) establishes that elevated levels of fatty tissue cause cognitive deficits, with potential significance for humans suffering from obesity or diabetes. The authors hypothesize that the mechanism of action involves the inflammatory cytokine interleukin 1 beta. Interventions that restored cognitive function included exercise, liposuction, and intra-hippocampal delivery of an IL1 receptor antagonist (IL1ra).
You may find better ideas under the phrase “stochastic optimization,” but it’s a pretty big field. My naive suggestion (not knowing the particulars of your problem) would be to do a stochastic version of Newton’s algorithm. I.e.: (1) sample some points (x, y) in the region around your current guess (with enough spread around it to get a slope and curvature estimate); (2) fit a locally weighted quadratic regression through the data; (3) subtract some constant times the identity matrix from the estimated Hessian to regularize it, choosing the constant (just) big enough to enforce that the move won’t exceed some maximum step size; (4) set your current guess to the maximizer of the regularized quadratic; (5) repeat, re-using old data if convenient. A rough sketch in code follows.
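Here’s a minimal Python sketch of that procedure, under assumptions of my own choosing (the toy objective, the sampling spread, the Gaussian weighting, and the step-size cap are all illustrative, not tuned):

```python
import numpy as np

def noisy_objective(x):
    # Hypothetical noisy function to maximize: a smooth bowl plus noise.
    return -np.sum((x - 1.0) ** 2) + 0.1 * np.random.randn()

def stochastic_newton_step(f, x, spread=0.5, n_samples=50, max_step=0.5):
    """One iteration: sample around x, fit a locally weighted quadratic,
    regularize the estimated Hessian, and move to the quadratic's maximizer."""
    d = len(x)
    # (1) Sample points around the current guess.
    X = x + spread * np.random.randn(n_samples, d)
    y = np.array([f(xi) for xi in X])
    # (2) Weighted quadratic fit: y ~ b0 + g.z + 0.5 z'Hz, with z = X - x.
    Z = X - x
    quad = np.column_stack([Z[:, i] * Z[:, j]
                            for i in range(d) for j in range(i, d)])
    A = np.column_stack([np.ones(n_samples), Z, quad])
    w = np.exp(-0.5 * np.sum(Z ** 2, axis=1) / spread ** 2)  # local weights
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    g = beta[1:1 + d]                   # estimated gradient at x
    H = np.zeros((d, d))                # rebuild the symmetric Hessian
    k = 1 + d
    for i in range(d):
        for j in range(i, d):
            if i == j:
                H[i, i] = 2 * beta[k]   # coefficient of z_i^2 is H_ii / 2
            else:
                H[i, j] = H[j, i] = beta[k]
            k += 1
    # (3) Regularize: subtract c*I so H - cI is negative definite, doubling
    # c until the Newton move stays within the maximum step size.
    c = max(0.0, np.linalg.eigvalsh(H).max()) + 1e-6
    step = np.linalg.solve(H - c * np.eye(d), -g)
    while np.linalg.norm(step) > max_step:
        c *= 2.0
        step = np.linalg.solve(H - c * np.eye(d), -g)
    # (4) Move to the maximizer of the regularized quadratic.
    return x + step

# (5) Repeat from a starting guess.
x = np.zeros(2)
for _ in range(25):
    x = stochastic_newton_step(noisy_objective, x)
print(x)  # should drift toward the maximum near (1, 1)
```

The regularization constant doubles until the step fits inside the trust region, which is a crude stand-in for “choose it just big enough”; a bisection would be closer to that spirit.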
As a counterargument to my previous post: if anyone wants an exposition of the likelihood principle, here is a reasonably neutral presentation by Birnbaum 1962. For coherence and Bayesianism, see Lindley 1990.
Edited to add: As Lindley points out (section 2.6), the adequacy of a small model can be tested in a Bayesian way by considering a larger model that includes the smaller one. Fair enough. But is the process of starting with a small model, thinking, and then considering, possibly, a succession of larger models, some of which reject the smaller one and some of which do not, actually true to the likelihood principle? I don’t think so.
To be a Bayesian in the purest sense is very demanding. You must articulate not only a basic model for the structure of the data and the distribution of the errors around it (as in a regression model), but also all your further uncertainty about each of those parts. If you have some sliver of doubt that maybe the errors have a slight serial correlation, that has to be expressed as part of your prior before you look at any data. If you think the model for the structure might not be a line, but might be better expressed as an ordinary differential equation with a somewhat exotic expression for dy/dx, then that had better be built in with appropriate prior mass too. And you’d better not do this just for the 3 or 4 leading possible modifications, but for every one you assign prior mass to; and don’t forget uncertainty about that uncertainty, up the hierarchy. Only then can the posterior computation, which is now rather computationally demanding, deliver your true posterior.
Since this is so difficult, practitioners often fall short somewhere. Maybe they compute the posterior from the simple form of their prior, then build in one complication, compute a posterior for that, compare, and, if the two look similar enough, conclude that building in more complications is unnecessary. Or maybe… gasp… they look at residuals. Such behavior is often a violation of the (full) likelihood principle, because the principle demands that the probability densities all be laid out explicitly and that we obtain information only from ratios of those.
So pragmatic Bayesians will still look at the residuals (Box 1980).
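For concreteness, here’s a minimal Python sketch of the sort of residual check Box 1980 has in mind, done as a posterior predictive check. The model, prior, simulated data, and test statistic are all illustrative assumptions of mine, not anything from the paper: fit a straight line to data that secretly has extra structure, then ask whether the observed residuals’ lag-1 autocorrelation looks extreme relative to replicated datasets drawn from the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a wiggle the straight-line model ignores.
n = 100
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x + 0.3 * np.sin(8 * x) + rng.normal(0.0, 0.2, n)

# Posterior for (intercept, slope) under a flat prior with known sigma:
# a Gaussian centered at the least-squares estimate.
A = np.column_stack([np.ones(n), x])
sigma = 0.2
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
post_cov = sigma ** 2 * np.linalg.inv(A.T @ A)

def lag1_autocorr(r):
    # Test statistic: residual lag-1 autocorrelation, which is sensitive
    # to exactly the structure a straight line misses.
    r = r - r.mean()
    return np.dot(r[:-1], r[1:]) / np.dot(r, r)

t_obs = lag1_autocorr(y - A @ beta_hat)

# Posterior predictive distribution of the statistic: draw parameters,
# replicate the data, refit, and recompute the statistic.
t_rep = []
for _ in range(2000):
    beta = rng.multivariate_normal(beta_hat, post_cov)
    y_rep = A @ beta + rng.normal(0.0, sigma, n)
    b_rep, *_ = np.linalg.lstsq(A, y_rep, rcond=None)
    t_rep.append(lag1_autocorr(y_rep - A @ b_rep))

p = np.mean(np.array(t_rep) >= t_obs)
print(f"observed lag-1 autocorrelation: {t_obs:.3f}")
print(f"posterior predictive p-value: {p:.3f}")  # small p flags misfit
```

Notice that the tail-area calculation averages over data that didn’t occur, which is precisely why a strict adherent of the likelihood principle would object to it.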
It’s easy to be sympathetic with these two scenarios; I get frustrated with myself often enough. Would it be helpful to discuss an example of what your thoughts are before a social interaction, or in one of the feedback loops? I’m not really sure how I’d be able to help, though… Maybe your thoughts are thoughts like anyone would have: “shoot! I shouldn’t have said it that way, now they’ll think...” but with more extreme emotions. If so, my (naive) suggestion would be something like meditation, toward the goal of being able to observe that you are having a certain thought/reaction without identifying with it.
Naive question (if you don’t mind): what sorts of things trigger your self-deprecating feelings, or are they spontaneous? E.g., can you avoid them or change circumstances a bit to mitigate them?
Humans vote as if they are making declarations of support in a public arena.
Interesting. Can you point me to an example of something surprising that’s predicted by this interpretation? I’m a little confused, though, because many people are very public about how they voted anyway (it seems unlikely they’re lying), so voting is effectively public, no?
An interesting take is to have a game where programming is an integral part of solving the puzzles.