I’m a freak about tools and symmetries. I suffer from chronic achronic dysfunction. Usually displaced a few years into the future? What is now or then? Probably on the spectrum, too, but why know where? I’d actually prefer to discover I am less right because that’s where the biggest learning opportunities are, but at the same time deep learning becomes more difficult with age. Latest self-modifications? Homo sapiens’ mobile larynx and DNA as recipe book, not blueprint.
shanen
Okay and you’re welcome, though I wish I had understood that part of the discussion more clearly. Can I blame it on the ambiguity of second-person references where many people are involved? (An advantage of the Japanese language in minimizing pronoun usage?)
Is your [Dogon’s] reference to “your model” a reference to ‘my [shanen’s] preferred financial model’ (obliquely referenced in the original question) or a reference to Vladimir_Nesov’s comment?
In the first case, my “preferred financial model” would involve cost recovery for services shared. An interesting example came up earlier in this discussion in relation to recognizing consistency in comments. One solution approach could involve sentiment analysis. In brief, if you change your sentiment back and forth as regards some topic, then that would indicate negative “consistency”, whereas if your sentiment towards the same topic is unchanged, then it indicates positive consistency. (If your sentiment changes rarely, then it indicates learning?) So in the context of my preferred (or fantasy) financial model, the question becomes “Are enough people willing to pay for that feature?”
Now things get more complicated and interesting in this case, because there are several ways to implement the feature in question. My hypothesis is that the solution would use a deep neural network trained to recognize sentiments. The tricky part is whether we yet know how to create such a neural network that can take a specific topic as an input. As far as I know, right now such a neural network needs to be trained for a specific domain, and the domain has to be narrowly defined. But for the sake of linking it to my financial model, I’m going to risk extending the hypothesis that way.
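Just to make the metric concrete, here’s a minimal sketch in Python of the consistency calculation itself, assuming the hard part (a topic-conditioned sentiment model producing scores in [-1, 1]) already exists somewhere. Everything here is illustrative, not any real system’s code:

```python
def consistency_score(sentiments):
    """Crude consistency metric: fraction of adjacent comment pairs
    whose sentiment toward the topic keeps the same sign.
    `sentiments` is a chronological list of scores in [-1.0, 1.0],
    assumed to come from some topic-conditioned sentiment model."""
    if len(sentiments) < 2:
        return 1.0  # too little data to show any inconsistency
    flips = sum(
        1 for a, b in zip(sentiments, sentiments[1:])
        if a * b < 0  # sign change = sentiment reversal
    )
    return 1.0 - flips / (len(sentiments) - 1)

# A commenter who reverses sentiment on every comment scores 0.0;
# one who never reverses scores 1.0.
```

Note that a single reversal barely dents the score, which roughly matches the “changes rarely indicates learning” caveat above.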
Now we get to an interesting branch point in the implementation of this feature for measuring consistency. Where do we do the calculations? As my financial model works, it would depend on which approach the users of the feature wanted to donate money for. I’m going to split it into three projects that could be funded:
1. Developing the deep neural network to analyze sentiments towards input topics. This is basically a prerequisite project, and unless enough people are willing to fund it, the feature is DOA.
2. Analyzing the data with the neural network on the LW (LessWrong) side. In this version of consistency measurement there would be a lot of calculation on the LW side testing sentiments against topics, so there would be both a development project and a significant ongoing cost project. Both parts of this double project would need sufficient donor support to use this approach.
3. Analyzing the data with the neural network on the users’ side. In this version of consistency measurement, the tedious calculations could be removed from LW’s servers. The trained neural network would be downloaded, and each person would calculate (and optionally share) the consistency metric using the data of that person’s own comments. The cost of the development project should be similar, but there wouldn’t need to be donors for a major ongoing cost project. (I would actually favor this version and be more likely to donate money for this version due to privacy considerations.)
(If there are enough donors, then both 2 and 3 could be supported. However, deciding which one to implement first could be determined by which project proposal attracts enough donors first.)
In the second case, I’m afraid I don’t understand what part of Vladimir_Nesov’s comment was about a “model”. And you weren’t talking to me, anyway. And I should also apologize for my longish and misdirected response?
Well, judging by the negative karma it has given me, it seems clear that there is some “problem with the question” as LessWrong voters see things. Good thing that I don’t much care about karma, eh?
However, mostly I take it as evidence against the one-dimensional approach to measuring karma. Rather than take it as evidence that LessWrong itself should have lower karma, I see it as evidence that the shallow and close-minded Reddit approach of thumbs up or down is flawed. Even though the code has been rewritten, I think LessWrong might be “fighting the last war” on that front. Now back to the actual issue at hand...
Is pandemic insurance a good idea?
To address the substance of your [ChristianKI’s] comment [But did you also give me a thumb down?], we will always be in a position of “fighting the last war” after any surprise attack. That’s actually why I decided that Question (0) was the best one to focus on, since that generalization is (hopefully) applying a lesson learned from Covid-19 to develop a good policy for businesses going forward. (I also considered taking the approach that the pandemic was not a surprise, per John Oliver’s main story on the latest episode of Last Week Tonight. He included some historical background, and I agree that we should not have been so surprised by the attack of Covid-19.) (I even believe that we are only a couple of unlucky mutations away from being back to square one in the “last war” against Covid-19.)
So let me expand the topic into two “concrete models” of insurance, no-fault auto insurance and health insurance.
In the case of no-fault auto insurance, we basically take the position that there are going to be some automobile accidents that cause significant damages. We could spend a lot of time haggling about who is at fault and how much they should pay, but (at least in America) it was decided to simplify the situation by requiring all car owners to have liability insurance for their cars. Personal example time: I was involved in a car accident. No question but that it was the other driver’s fault. I even knew that I could sue him and the average settlement for such cases was twice the value of his insurance coverage. Was I a fool for accepting the insurance company’s offer of his full covered value? Maybe I was a fool, but I didn’t see any reason to damage his life for more money. (Plus I believed that the accident was at least partly his wife’s fault, so that suing him would have put stress on their marriage.)
I started with the auto insurance because that’s a more conventional case. Most of the people who buy the insurance don’t suffer the damages and the claims are paid from the premiums of the ‘lucky’ people who have no accidents. However medical insurance is different because sooner or later everyone is going to have major medical costs. But (again limited to America) the solution has been based on private insurance. There are two tricks involved in this version of the solution. One is the “sooner or later” part which justifies making young people pay sooner for the medical costs they will incur later. The other is the variable costs, especially at the end. I am not endorsing this position (and have major reservations about medical insurance in general), but some people die much more cheaply than other people do. End of life care can be quite expensive but many people are willing to be “reasonable” about it, at least when they are in good health and preparing a Living Will for their future self in terminal condition. (And now you see where the “death panels” come from?)
Prediction time: We will have new diseases and new pandemics in the future. I actually think that Covid-19 was an unlucky zoonotic accident, but the next accident could be much worse. Or the next pandemic might not be an accident. Maybe Betteridge was wrong and the answer to this headline question should have been “Yes”?
One more thing. Might even be related to the mysterious karma thing. Maybe the negative karma was voted in fear of politics? However, I would say that I want to avoid political responses to medical crises and Covid-19 is a MEDICAL crisis that has also triggered a secondary economic crisis. My focus was on dividing the economic crisis away from the medical crisis, where insurance is the “normal” solution to deal with economic crises. However Covid-19 could be (and probably has been) linked to politics via the money.
Meta again...
Now should I say “oops” about my karma? No, thank you. I would be willing to discuss that with the people who felt so strongly (and negatively) about the topic. Or maybe they just felt slightly negative and LessWrong encourages expressing such negative feelings? Or maybe I am merely slightly curious about whether I should care about the opinions of the (to-me-anonymous) people who felt the question deserved a negative rating?
Is there something substantively wrong with the topic? Four thumbs down (currently) say “Bad”, but I say “Boo” (and not as in “boo hoo”). I think it would be nice if they had been encouraged (or even required) to offer a few words about why they dislike the question so much. (How about giving less weight to negative karma if it is not actually justified with an explanation? Or even let downvoters pick from a menu of reasons to give full weight to their thumbs?)
It’s hard to change or improve something without measuring it. I think you are describing a fairly complicated concept, but it might be possible to break it down into dimensions that are easier to assess. For example, if some of the assessments are related to specific comments or replies [our primary “actions” within LessWrong], then we could see what we are doing that affects various aspects of our “amiability”.
This demonstration of “Personality Insights” might help illustrate what I’m talking about. If you want to test it, I recommend clicking on the “Body of Text” tab and pasting in some of your writing. Then click on the “Analyze” button to get a display for some of the primary dimensions. If you then click on the “Sunburst visualization” link at the bottom, you’ll see more dimensions and how they are grouped. I think your notion of “amiability” may be within the cluster of “Agreeableness” dimensions.
Another way to think of it is related to the profiles that Facebook and the google have compiled for each of us. My understanding (from oldish reports) is that they are dealing with hundreds of dimensions. I would actually like to see my own profile and the data that created it. I might even disagree with some of the evaluations, but right now those evaluations are being used (and abused) without my knowledge.
Is this another karma-related topic? Your tags suggest otherwise, but I would like to see some of these dimensions as part of the karma metric, both for myself and for other people. Most of the examples you cite seem to be natural binary dimensions, but not fully orthogonal. Not sure what I should say here, but I’ll link to my longest comment on LessWrong on the topic of enhanced karma. As you are approaching the topic, such an approach would help me recognize “amiable” people and understand what makes them amiable. I doubt that becoming more amiable is one of my goals, but at least I could reflect on why not. Or perhaps most importantly I could look for the dimensions that reflect sincere amiability to filter against the fake amiability of the “charming sociopaths” you mentioned.
I feel like this branch of the discussion might be related to Dunbar’s Number? Either for total members or for active participants. Is there any data for number of participants over time and system versions?
However I also feel like Dunbar’s Number is probably different for different people. Social hubs have large numbers of personal friends, whereas I feel overwhelmed by any group of 150. My personal Dunbar’s Number might be around 15?
This topic of karma in general interests me, per my reaction to the karma project from 2019. However my question in response to this “site meta” item is: “Is there a karma explorer?” One side would be a way to see the basis of my own karma, but I would also like a way to understand the basis of the karma of other users. For example, I see that the author habryka has over 13,000 points of karma here and 242 points of karma somewhere else, but what does that actually mean? Does any of that karma represent reasons I should read comments from habryka with greater attention? (Right now it feels like there are a lot of magic numbers involved in karma calculations?)
More fuzzy reaction, but I feel like whatever forms the basis of karma, it should age over time. Recent contributions to karma should matter more than old ones.
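To sketch what such aging might look like, here’s one hedged possibility in Python. The exponential decay, the one-year half-life, and the vote format are all made up for illustration, not anything LW actually does:

```python
from datetime import datetime, timedelta

def decayed_karma(votes, now, half_life_days=365.0):
    """Sum the votes, each weighted by exponential decay of its age.
    `votes` is a list of (timestamp, value) pairs; a vote loses half
    its weight every `half_life_days`. The half-life here is an
    arbitrary illustrative choice, not anything LW actually uses."""
    total = 0.0
    for when, value in votes:
        age_days = (now - when).total_seconds() / 86400.0
        total += value * 0.5 ** (age_days / half_life_days)
    return total

# A vote from today counts fully; a year-old vote counts half.
```

The nice property is that an old pile of karma fades gracefully instead of being deleted outright.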
Thanks for the lead to the “Site Meta” tag. I have that one open in another tab and will explore it next. However my general response to your reply is that part of the problem is that I would like to see different kinds of “tracking summaries” depending on what kinds of things I am trying to understand at a particular time.
You introduced a new example with your mention of “meetup announcements”. If you are trying to track your activity on LW in terms of such meetings, then you want to see things from that perspective.
What I have done in today’s experiment is to open all the “recent” notifications in tabs because it is not clear which ones are actually new… It would be helpful if the notifications pulldown list also showed the notification times (though the mouseover trick for date expansion also works for the relative dates on the floating summary that appears to the left of the notification when you hover over it). Overall I’m still having a difficult time grasping the status of this question.
Accuracy is relatively easy to assess. If you think someone is saying something that is false and you are reacting to the comment on that basis, then you should be able to cite appropriate evidence to that effect. (But the other person should be able to object to your evidence as part of a ‘proper’ MEPR system.)
I actually think most dimensions of the reputation system should be normalized around zero, so that if people tend to give more negative reactions, then the system should be adjusted to make it more difficult to give a negative reaction, such as saying a comment is inaccurate. (However I also think that should be weighted by the MEPR of the person making the rating. If someone has established a long track record of catching inaccuracies, then the likelihood that a new flag is correct is higher for that person.)
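In code, the weighting idea might look something like this sketch, where `rater_weight` stands in for some MEPR-derived track record. That proxy is purely my illustrative assumption, not a spec:

```python
def weighted_rating(ratings):
    """Aggregate accuracy flags, weighting each by the rater's own
    track record. `ratings` is a list of (value, rater_weight) pairs,
    where rater_weight might be the fraction of that rater's past
    inaccuracy flags that held up (a made-up proxy for MEPR standing).
    Returns the weighted mean, or 0.0 when nobody has any standing."""
    total_weight = sum(w for _, w in ratings)
    if total_weight == 0:
        return 0.0  # no track record, no effect (normalized to zero)
    return sum(v * w for v, w in ratings) / total_weight
```

So a downvote from someone with no history of justified flags barely moves the needle, which is the “more difficult to give a negative reaction” adjustment in miniature.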
I agree that consistency is much trickier. Even in the case where I know the person has changed his mind on a topic, I would not regard it as inconsistent if there was good reason for that change. I think I might like computer support for something like that. How about a triggered search? “Show me this person’s comments about <target keyword>” and I could then look over the results to see if they are unchanged, evolving over time, or jumping back and forth.
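A crude sketch of that triggered search, with the sentiment scores again assumed to come from some external model (the hypothetical part), just to show the three-way labeling:

```python
def sentiment_trajectory(comments, keyword):
    """Pull the chronological sentiment scores for comments mentioning
    `keyword`, then label the commenter's stance as 'unchanged',
    'evolving', or 'oscillating'. Each comment is a (text, sentiment)
    pair; the sentiment scores are assumed to come from some external
    model, since this sketch only handles the bookkeeping."""
    scores = [s for text, s in comments if keyword.lower() in text.lower()]
    flips = sum(1 for a, b in zip(scores, scores[1:]) if a * b < 0)
    if flips == 0:
        return "unchanged"
    if flips == 1:
        return "evolving"   # a single reversal may just be learning
    return "oscillating"    # repeated back-and-forth
```

Treating exactly one reversal as “evolving” bakes in the earlier caveat that a well-reasoned change of mind is not inconsistency.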
But actually that is something I would like to apply to my own comments over time. I think I am fairly consistent, but perhaps I am deluding myself?
Just rereading the entire “question” to try to assess it, and almost overlooked your [Viliam’s] helpful numbered list. I think I have replied as appropriate (if replying was appropriate?) and hope that the notification system will let me know if I should come back.
On the basis of your encouragement, I’m going to try to write something for the literacy software topic. Not sure upon what basis you think it might be “great”, but I could not find much that seemed to be related in my search efforts on LW. The obvious searches did produce some results, but how they are ranked is still unclear. For example, I remember a “literacy” search with four primary results, but two of them were for narrow senses of literacy such as “financial literacy”. Before starting to write, I’m going to try searching from the list of tags. (It would be helpful if there were an option to sort by the numbers there… That way I could spot the more relevant tags more easily. (I’m guessing that the numbers are the authors’ usage counts for the tags, but there should be a way to link to the readers’ counts to capture the other side of interest? (What people want to read about in contrast to what people want to write about. (Yet another symmetry thing?))))
Backing up to the top level, I haven’t obtained much insight into the original question. I guess my summary of my understanding now would be “We’re sort of above worrying about money, so go have fun with the LW tools we are creating.” I think that summary reflects input from at least two of the creators of the tools. The users’ side seems to be “We’re having fun and that’s why we do it.”
Thank you for the reply, and I am also somewhat aware of karma. It does seem useful, but not in a searchable way. Per my suggestion for extended karma (one of my first efforts on LW), I wish that karma (in a multidimensional form) were usable for self-improvement, for filtering and prioritizing, and even for searching for people who are likely to write things worth reading.
I guess one helpful step would be if karma was included in the flyover display. Right now the “ChristianKI” flyover only reveals 4 dimensions of your identity: Your identity’s age (joined date), # of sequences, # of posts, and # of comments. That gives me some idea of your activities, but isn’t as helpful (in my imagination) as a radar icon showing that you are above average on consistency and accuracy and perhaps below average in some other dimensions.
Thank you for another deep and thoughtful response. But what response should I make? [Note that second person “you” here refers to Viliam, but there is risk of confusion if I say something to the broader (but unknown) audience. I’ll try to be careful… But in this discussion I am sure that I have already used “you” with reference to someone else. [I find myself wishing that English had a mechanism to avoid confusing “you” references without ponderous third person descriptions such as “Viliam in his comment of <timestamp> said...”]]
The easy part is to pick a couple of nits, but I’m trying to get deeper than that… But when I back up (and look at the context) then the volume becomes overwhelming and I’m having trouble unraveling the topics. I do feel that part of the problem is my poor and unclear writing, but it is also true that I don’t understand how to use the system well.
So I’m going to focus on two nits here, one that reflects my lack of understanding of the system and one that reflects the lack of clarity in my writing. Then I’ll try to get back up to a higher perspective, which seems to be the karma thing… (But that topic is more related to my earlier reply on the karma “research” from the end of 2019.)
At the end of your comment, what you described is an interesting example of my lack of understanding of the LW system. Or maybe an example of my failing eyesight? I definitely knew that it worked exactly the way you described it for “top-level” content, but for several days I was apparently unable to see the fifth icon on the context menu when I was working on a reply (such as this one). But this is just part of a more general lack of familiarity with the system. Another example: A few minutes ago I spent several minutes figuring out that a “5m” notation meant 5 minutes ago, not 5 months ago, even though the article had an “11y” notation for the 11 years from 2010. The section heading of “Recent Discussion” should have made it more obvious to me, but now I wonder what the notation for 5 months ago would have been… (Relative times are good, but sometimes confusing.)
The other nit involves my poor clarity. I was already quite aware of the “this” link you posted to my user page and it does list my contributions, but not in the sense of “track” that I was trying to describe. There are also the pull-down notices invoked by the bell icon at the upper right. What I am currently unable to do is combine these views to get a mental image of what is happening. Where do my own comments fit into the discussion? What is the structure of the replies?
Is there a tree graphic representation of the discussions hidden somewhere around here? I’m imagining a node diagram with one color for my own contributions, separate colors for each of the primary contributors, and then a fallback color for grouping all of the minor contributors. Now I’m imagining solid lines for direct replies and dotted lines for links that go elsewhere. (If the 80-20 rule applies to discussions here, then at least the part with colors for contributors might work well enough with a reasonably small number of colors.)
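Even without a real graphic, a textual stand-in can show the grouping idea. All of the record formats and names here are hypothetical, and indentation stands in for the node diagram’s edges:

```python
def render_thread(comments, me, major_authors):
    """Print a discussion as an indented tree, tagging each node with
    a 'color' group: ME for one's own comments, the author's name for
    major contributors, and OTHER as the fallback bucket (the 80-20
    idea above). `comments` is a list of hypothetical
    (comment_id, parent_id, author) records, with parent_id None at
    the top level. A textual stand-in for a real node diagram."""
    children = {}
    for cid, parent, author in comments:
        children.setdefault(parent, []).append((cid, author))
    lines = []
    def walk(parent, depth):
        for cid, author in children.get(parent, []):
            group = "ME" if author == me else (
                author if author in major_authors else "OTHER")
            lines.append("  " * depth + f"{cid} [{group}]")
            walk(cid, depth + 1)
    walk(None, 0)
    return "\n".join(lines)
```

Swapping the bracketed group tags for actual colors (and the indentation for solid reply edges) would give the diagram described above.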
For whatever it is worth, I feel like this discussion itself is already beyond my ken. I feel like the lesson that I am learning is that I need to learn to limit my questions MUCH more narrowly. (I have only looked at a few replies, and my available time is already becoming exhausted by this one reply. But was this the best place to begin today? (And now I lack time (and mental energy) to return to the karma topic.))
Also I greatly appreciate the politeness of the replies and I feel like I am being indulged in my ignorance. In solution terms, how could I learn about the system without bothering other people? (Or is that intrinsically impossible in the context of a discussion system such as this?)
Again, thank you for your thoughtful reply. I feel like I’m trying to use a depth-first response strategy and it’s making it harder for me to see what is really going on.
I think the most interesting problem raised in your response is the integration problem. If people are just contributing their thoughts because they want to, then they don’t really have much incentive to do the hard work of integrating their thoughts into the thoughts of other people. If Wikipedia is able to accomplish that kind of integration to a fairly high degree, I think it is due to their guiding principles, and right now I don’t understand the principles of LW. I can definitely say (based on many years of professional work) that it’s hard work and I was well paid for my efforts in making technical papers (up to dissertations) more cohesive and integrated with previous research results.
My newer view is that LW is almost like a form of performance art, with the contributors in the role of artists.
What could LW do to encourage more integration of the content? I see it largely as a search and editing problem. Projecting again? At least I think it would be nice if LW was looking at what I am writing and searching for related content, perhaps showing candidates over in that empty space on the right side (of my biggest display). Then the editing problem would be supporting me in integrating my new content into the older content so that I could help extend or clarify that material.
But why would I make the effort? Obviously when I was paid to do that kind of thing, the answer was easy. Because I was doing it for money. Trickier to justify now. I think I’m mostly writing because it helps me clarify my own thinking about things. I also feel a sort of “teacher’s satisfaction” when I feel like I’ve “enlightened” someone. It would be nice if the system (LW in this example) offered me some way to track my contributions. I might even feel like I’d accomplished something if I found I had written 30,000 words last year. (Actually, I am tracking my writing, but without caring enough to run the totals. But I’d estimate at least 200,000 words/year. Probably less than 400,000?)
But there are many reasons for writing. I think some people write in hopes of getting famous. That may be linked to hopes of earning money, or even striking it rich with a bestselling book. Some people seem to write in search of attention or recognition. Then there are the trolls, some of whom seem to write to annoy people and get negative attention. (Why? Such motives are evidently beyond my ken?)
Anyway, I’ve wandered off again. My original intention in posing this question was rather different. I’m trying to figure out what sort of things I can ‘properly’ write about here on LW. My interests are pretty broad. I do feel like AI is a heavy concern, even a favored topic, here on LW, and that is probably related to the preferences of the donors or maybe the personal concerns of the “team of six” (artists?) who do the “site development and support”.
Maybe it would be good to try a list of topics and ask which are and are not appropriate for LW? For the appropriate topics, someone could help me figure out where they belong? Or even better if they have already been discussed exhaustively and I can just learn how to find those discussions? So here’s a short list of a few of the things on my mind these days:
Literacy development software (mostly for kids but with adult options (including multilingual))
Personal reputation systems (mostly to become a better person by understanding how other people evaluate my reputation (and I already wrote a bit about this on LW) but also to recognize (and filter) ‘untrustworthy’ sources of dubious information) [Should I link to that comment? But evidently anchored links are not supported here in replies?]
Time-based economics (which I tag “ekronomics” (and which is broadly related to this selfsame discussion))
New products (like smartphone hats and modular smart chairs and multi-mode super-bikes and timing-based continuous BP monitors (without pressure cuffs) and a Pokemon Chair app and...)
AI (but mostly I feel like it’s a pointless topic, since the answers are intuitively obvious to the most casual observer (such as the late and great Iain M Banks)) ;-)
Political reform (with some radical thoughts like no-loser guaranteed-representation elections with logarithmic weighting and additional dimensions for new political checks and balances)
Now I feel like I’m wandering way too much, but I hope some parts of it were of some interest to someone. Right now I’m mostly just trying to figure out where my ideas fit on LW. If they fit anywhere? I just started with the ‘influence of money’ aspect, probably because I feel like I should pay for any value received and I hope to receive value from LW. (And of course payments don’t need to be monetary or even evaluated (with shoehorns) based on monetary equivalents.)
Again, thanks for your replies, though I’m still not sure what to make of them.
On the one hand, I agree that independence is a good thing (even though I may sometimes disagree with some people’s independent decisions). On the other hand, I have deep reservations about charities that in a sense allow governments to evade their appropriate responsibilities to the citizens of their nations. Especially in the case of serious problems, it shouldn’t be a matter of luck (if the victim stumbles across a helpful charity) or willingness and ability to actively beg for help. (Food as an obvious example. Some people prefer to starve to death before begging.) On the third hand, I think there are multiple constituencies here (within LW) and each person and each group of people have different priorities and objectives, etc.
Several more hands, but let me try a few exploratory questions instead. Which “constituency” do I belong to (from your LW team perspective)? How should I properly express support for or concern about “developments” (on LW)?
BtW, I think I like the leisurely atmosphere of LW. However I may be projecting due to my recent externally forced shifts of priorities (which are also obliging me to give LW a relatively low priority). But on the fourth hand I am also having trouble figuring out what material on LW is still relevant even though it is old. LW kind of feels like a virtual book in the process of formation, with various chapters in various states of completion… (The longest chunk of my career was technical editing for a TLC, but the research lab didn’t publish many books. Some chapters and dissertations came across my desk from time to time, but mostly just conference papers and HR stuff.)
Interesting reply, and again I thank you for your thoughts. Still not seeing how “politics” figures in. I’m not trying to provoke any emotional reactions. (Nor do I perceive myself as having any strong emotional reactions to anything I’ve seen on LW so far.)
The part about your BBS especially hits a nerve. I created and operated a BBS in my youth. I did include a financial model in the design of my BBS, but my primary motivation at the time was to create a real cost for abuse of the BBS and secondarily to recover some of the costs. (Dedicated hardware and an extra phone line (I think).) I did not include my programming time as a cost because I mostly regarded that as a learning experience that was also improving my own market value as a programmer. Looking back, I actually think the deficiencies in my financial model greatly limited the success of the system, and if I had done it again, then I would have changed the priorities so that the funding model of the BBS put priority on the main objectives of the users. I even see how I could have arranged the model to align my personal philosophy more closely to the users’ objectives. (But I don’t have a time machine to go back and fix it now and I got busy with other stuff for many years after that...)
I also sympathize (?) or partially concur with the idea of keeping things small and self-contained. However I also see that as part of the financial model. I think the Diaspora fiasco on Kickstarter is a good example of how such things can go wrong. If they had just gotten the first increment of money and started by implementing the kernel server, then maybe the project could have succeeded step by step. Instead, the project hit the jackpot, and they tried to refactor and redesign for the grand new budget, and things mostly went bad after that.
Another relevant example I could use would be Slashdot, though I don’t know how many of the people on LW are familiar with it. My perception is that the rolling ownership indicates a portable nuisance status, though the nuisance status may be some form of non-pressing debt rather than anything that threatens the existence of the website. Whatever the cause, it seems that Slashdot lacks the resources to fix even the oldest and best-known limitations of the system. (In particular, the moderation system of Slashdot would seem to need some adjustments.)
Hmm… I feel like my use of examples is diverging from the guidelines’ intended meaning for “concrete models”.
Thank you for your reply. I looked at your link, but I am not clear about the relation of “politics” to my question as currently constrained. (Right now I see no reason to extend it in that direction unless the financial model is related to politics. I have so far seen no evidence to that effect. Maybe you could clarify how you see the relationship?)
I was trying to avoid expressing my opinions or suggestions, though if I didn’t see the world (or some aspect of the world) as potentially different, maybe even better, then I would deny that there is any problem to be considered. A problem without a solution is not really a problem, but just part of the way things are and we have to live with it. To pretend that I have no opinion or perspective would be quite misleading.
Or I could remap it to the word “question” itself? If no answer exists, then where (or why) was the question?
Perhaps you could clarify what you mean by “question” in the context of a question that is suitable input for the “New Question” prompt? Would that be a better way to approach it?
Looking (yet again) at the “Default comment guidelines”, the explanation for my phrasing of the question is that my initial reading of LW seemed to indicate that money is not supposed to influence the discussions, and I am skeptical of that. I am asking for clarification, but that may be a request to be persuaded that LW has a viable financial model? My previous reply included a more concrete example. As a prediction? Hmm… I guess there must be some topics which are not suitable for discussion on LW, and therefore I could predict that some of them may be unsuitable for reasons related to the financial models. I still don’t see anything that I disagree with, and I am already curious about what y’all are thinking (but that is part of my general theory of communication as a two-way process).
Say oops? Not yet, but it happens all the time. I hope I change my mind frequently as I learn new things, but I also try to minimize logical contradictions. I am usually trying to extend my mental frameworks so that apparent contradictions can be resolved or diverted. (I’ve gotten old enough that I think most of my positions have substantial data underlying them.)
I can easily apologize for my tangents. I do tend to wander. However I can also easily blame my zen collapse from some years back. It used to be a 6-degrees-of-Kevin-Bacon world, but now things too often seem to me to be only one or even zero degrees separated. It’s the same thing when you look at it that way?
It’s not easy to figure out how to fix my question. Even if I figured out how to improve it, I’m not sure whether I should edit it or just continue here in the comments, though I see it is possible to edit the original question.
So… Another way to word the question along your lines could be “How are the (visible) conversational flows affected by the (less visible) money flows?” (If I do modify the original, does it preserve old versions to clarify replies that will then seem out of context?)
Take the google as an example for the main topic? The google started with one set of goals and even had the motto of “Don’t be evil.” Then the money started flowing and the business mutated. I actually think the google’s de-facto motto these days is “All your attentions is belong to us [so we can sell your eyeballs to the paying advertisers].” But there is a fundamental inconsistency there. Advertisers do not want to pay for the most critical thinkers reasoning based on the best data. Advertisers want obedient consumers who will obey the ads, whether they are selling deodorant or stinky politicians. (In another (still simplified) perspective on that subtopic of advertising (bridging to money), the costs are extremely high for the final increments of quality to produce the best products that can then be advertised as being the best. In contrast, the costs are much lower for advertising that portrays legally adequate products as the best.)
I think it would be good if the article mentioned data sources, but perhaps I’m projecting since I have a lot of experience with them. Right now I’m using three devices to assess my sleep. One is motion-based and the quality of the data is limited. The other two are wristbands that combine pulse with arm motion to automatically detect and record sleep. Both of them divide sleep into deep, light, and REM, but they disagree quite a bit on the actual details when they measure my sleep. (And I wear both of them on the same wrist, too.) If there is interest I can provide more details (including some impressions from older activity monitors).
As I thought about my own data going back some years, I was reminded that age is an important factor, but it doesn’t appear to be mentioned in the article. I actually think I may be sleeping pretty well after adjusting for that factor.
Not even sure this comment is directed at me, but quite sure my reply is quite late. (In terms of deciding to reply, it would be helpful if LW revealed something about your recent activity in the flyover.)
At this point I don’t recall the books in sufficient detail to address your question properly. I do fit them into the general scheme of compulsive behavior. My general take on habitual behaviors (including compulsions) is that certain parts of the lower brain are the mechanical keys to the compulsions, but there’s a scale before things get into extremes like OCD. The complicated part is that there are many paths into the lower brain. On that foundation, my basic theory is that some people are more subject to compulsive behaviors (which can be roughly mapped to having less willpower), but the trigger for a compulsive behavior is a point of attack to change that behavior. Some triggers are definitely worse than others, so switching to a less troublesome trigger is an improvement.
Supplemental reading? I Am a Strange Loop by Hofstadter is relevant, though my interpretation is different from his. I think all of us run various mental programs, and his recursive loops are only a relatively minor subset. (But I think his Gödel, Escher, Bach is still a must-read, especially the chapter on translation.) Quite recently I read Descartes’ Error by Damasio, which is also relevant. He approaches these problems from a more mechanical level, but with heavy consideration of how emotions are involved in decision making. The books from Malcolm Gladwell and Dan Ariely are also excellent.
No, what you are saying is NOT related to what I am advocating. Even worse, I am having serious trouble trying to reconstruct a logical chain whereby you could have gotten there based on what I was trying to say. I know I write badly, but still...
Your departure point seems to be that government-mandated insurance is bad, but even there I am not convinced my examples were inappropriate. Rather I feel as though you are flying off in a completely different direction.
So let me try to take it from the top again. First of all, my basic premise is simply that insurance is a “normal mechanism” for responding to risk.
Now there are many kinds of risks and many kinds of insurance. Most insurance policies are based on predicting the future. I think you want to limit the examples to purely voluntary insurance, which doesn’t bother me, but the key to selling voluntary insurance is to convince the customer that the probability of the adverse event is high enough that some money should be diverted from present expenses to protect against that event. As a company would see things, that basically means deciding whether to spend less on maintaining and expanding the current business operations so as to protect future business operations. (And yes, I will even acknowledge that the government will naturally be pressured into regulating the insurance industry for several reasons. The most important one is that insurance companies have perverse incentives to exaggerate or even lie about the risks in order to sell more insurance.)
However, Covid-19 is an example of an unexpected and essentially non-insurable disaster. Even worse, it affects almost every company more or less adversely. Such disasters create overwhelming pressure for governments to respond to protect the lives and welfare of their citizens. My suggestion is not that we normalize disasters, but that we try to normalize the responses with TIDI (Time-Inverted Disaster Insurance) to reflect cases where the government has been forced to act as insurer of last resort.
The situation of TIDI is fundamentally different from regular insurance, but in some ways advantageous. Looking at the damages after the fact, we can actually get a clear handle on how much economic damage has taken place. From that perspective, the insurance companies are using TIDI to help total up the damages and apportion the economic responses according to the actual damages suffered.
The main “trick” in TIDI is that the premiums would be paid after the fact, after we’ve seen how bad the disaster was. And from that perspective, the main problem is making sure the government as customer pays the premiums within a reasonable time frame. That may be the fatal flaw, however. Governments are notoriously bad about sticking to their payment schedules.
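The apportionment part of the TIDI idea above can be sketched numerically. This is only a toy illustration of proportional division, not a claim about how real insurance accounting works; all company names and damage figures are invented:

```python
# Toy sketch of TIDI-style post-hoc apportionment (hypothetical figures):
# after the disaster, verified damages are totaled and a fixed relief fund
# is split in proportion to each company's share of the total damage.

def apportion_relief(damages: dict[str, float], relief_fund: float) -> dict[str, float]:
    """Split relief_fund proportionally to each company's verified damages."""
    total = sum(damages.values())
    return {name: relief_fund * d / total for name, d in damages.items()}

# Hypothetical verified damages, totaled after the fact:
verified_damages = {"AirlineCo": 600.0, "CafeChain": 300.0, "BookShop": 100.0}
payouts = apportion_relief(verified_damages, relief_fund=500.0)
# AirlineCo receives 300.0, CafeChain 150.0, BookShop 50.0
```

The point of the sketch is just that, unlike conventional insurance, nothing here requires predicting the future: every input is observable after the disaster, which is also why the premiums can be assessed afterwards.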
Maybe you think I’m trying to defend the notion of “too big to fail”? If so, then no. Rather I actually think that one of the most legitimate responsibilities of government is to prevent corporations from becoming too big. Every company should be free to go bankrupt at any time, but without taking the rest of the economy down with it. But yes, the entire economic system as a whole does need to be protected from collapse, and that’s another legitimate responsibility of government.
Returning to the specific case of Covid-19, I think mixing up the costs of the damages with the costs of the medical responses has made the situation much worse. An especially noteworthy bad example in Japan was a series of GoTo campaigns that were supposed to help businesses, but which actually wound up helping the coronavirus more.