Physicist and dabbler in writing fantasy/science fiction.
Ben
Nice post. Gets at something real.
My feeling is that a lot of contrarians get “pulled into” a more contrarian view. I have noticed myself in discussions proposing a specific, technical point correcting a detail of a particular model. Then, when I talk to people about it, I feel like they are trying to pull me towards the simpler position (“all those idiots are wrong, it’s completely different from that”). Sometimes this happens through things like “ah, so you mean...”, which is very direct. But it also happens through a much more subtle process: I talk to many people, and most of them go away thinking “OK, a specific technical correction on a topic I don’t care about that much” and never talk or think about it again. But the people who come away with the exaggerated idea are more likely to remember it.
Agreed. Quite aside from questions about whether the govt should be subsidising university education (I think it should), it is clear that retroactive cancellation of debt is the wrong way to do it.
Price controls or subsidies going forwards (for new students) would have a better long-term impact, and would also help poorer students more. Right now there are probably people out there who chose not to get a degree, or to get a different one, because they were worried about the debt. We can assume those people were mostly poor, and probably still are. They are the real victims of debt relief landing out of the blue. Imagine: that rich kid who took your place when you couldn’t afford uni now gets a government bailout. Rich getting richer. Making the actual upfront price lower helps the next generation of these people more than it helps anyone else.
I think you are missing an important one. I am not sure what the title would be, but the ideology says something like: “The students chose which courses to take and which loans to pay for them. The debt is, to some extent, self-inflicted.”
I think this makes student debt very different from, for example, medical debt. Where medical debt can land unexpectedly and be unavoidable for the unlucky, student debt is taken on voluntarily.
Half a joke, but arguably debt relief should only be available to those willing to forfeit their degrees. (A sort of “we’ll return the money if you return the goods” argument.) If there were a mechanism by which someone’s degree could actually be taken off them in a meaningful way, this would be quite interesting, as it would further incentivise universities to make sure their courses were less expensive and better value. I am now trying to imagine a world where any student, at any time, can go to their old university and demand a (partial?) refund in exchange for giving back the degree. There would be some decaying schedule, where returning the degree the day after graduation gets close to a 100% refund, but giving it back 10 years later gets much less.
I like this framework.
Often when thinking about a fictional setting (reading a book, or worldbuilding) there will be aspects that stand out as not feeling like they make sense [1]. I think you have a good point that extrapolating out a lot of trends might give you something that at first glance seems like a good prediction, but if you tried to write that world as a setting, without any reference to how it got there, just writing it how you think it ends up, then the weirdness jumps out.
[1] eg. In Dune, lasers and shields have an interaction that produces an unpredictably large nuclear explosion. To which the setting posits the equilibrium “no one uses lasers, it could set off an explosion”. With only the facts we are given, and the fact that the setting is swarming with honourless killers and martyrdom-loving religious warriors, it seems like an implausible equilibrium. Obviously it could be explained with further details.
This makes a lot of sense actually.
If an investor wants to invest in a couple of different markets, then that investor can choose some companies in those markets and buy shares. A single company straddling two non-synergistic markets is just silly from this perspective; if it were instead two companies, investors would be better able to choose which part they like.
If the company doesn’t have shares (privately held) then this incentive does not exist.
In “Capitalism and Freedom” Milton Friedman has a section about diverse companies. The book is probably dated, but some aspect of the 1960s American tax system meant that if a company you owned shares in paid you a dividend, you paid tax on it, but if it re-invested a profit, that was not taxed. This meant that if (cartoon example) 100% of the shareholders in Company A wanted to take their dividends and invest them in a start-up that does X, the same thing could be done more tax-efficiently by Company A instead paying no dividends and using the money to set up a new arm that does X (the new arm essentially being a separate startup company in all but name).
Friedman thought this was a reasonably serious distortion of markets and that investors should have to pay tax on re-invested profits at the same rate as dividends, to correct for it.
Perhaps some aspect of tax systems in some countries is having a similar effect, that taking some money out of my lawnmower company to invest in a chip foundry would incur more taxes than expanding the company to do both lawnmowers and computer chips.
I read an article about McDonald’s a while ago. One of the things that powered their early success and growth was an extremely small menu. The slim menu meant the kitchen handled little variety, enabling everything to be done faster and cheaper.
(https://en.wikipedia.org/wiki/History_of_McDonald%27s)
A giant company with 50 products could plausibly have 50 teams, each of which is focussed on making its product the best it can be. So the “focus” doesn’t have to be lost at scale; it’s more like a bunch of different organisations under one umbrella.
One issue is going to be filtering.
Strife and conflict are memorable. So you are searching for the least noteworthy examples, the ones that people are least likely to comment on or remember.
I don’t know what qualifies as a “community” really. At work I have seen uncontroversial changes come in a few times.
You are right.
I thought the whole idea with the naming was that, in the convention whereby twelve is written “12”, the symbol at the end (“2”) is the one symbolising the littlest bit, so I thought it was called “little endian” for that reason.
Now I have a lot of questions about how the names were chosen (to Wikipedia!). It seems really backwards.
How does a little endian do a decimal point? Do they put the fractional part of the number at the beginning (before the decimal) and the integer part afterwards? Eg. 123.456 becomes 654.321? So just as all integers in big-endian notation can be imagined to have a trailing “.0”, they can all be imagined to have a leading “0.” in little-endian?
The way we do it currently has the nice feature that the powers of 10 keep going in the same direction (smaller) through a decimal point. To maintain this feature, little-endian notation requires that everything before the decimal point is the sub-integer component. Which has the feature lsusr doesn’t like: if we are reading character by character, the decimal point forces us to re-interpret all previous characters.
[Edited to get the endians the right way around]
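Quick sanity check in code (Python, just as an illustration): the byte-order meaning of the names, plus a toy function for the digit-reversed decimal notation speculated above (not a real convention, just that scheme made concrete):

```python
# Byte order: the "end" at which the least significant byte is written.
n = 258  # 0x0102
print(n.to_bytes(2, "big"))     # b'\x01\x02' (most significant byte first)
print(n.to_bytes(2, "little"))  # b'\x02\x01' (least significant byte first)

def to_little_endian_decimal(s: str) -> str:
    """Toy digit-reversal for the speculation above: every digit written
    least-significant-first, so '123.456' becomes '654.321'."""
    return s[::-1]

print(to_little_endian_decimal("123.456"))  # 654.321
```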
Very interesting. It sounds like your “third person view from nowhere” vs the “first person view from somewhere” is very similar to something I was thinking about recently. I called them “objectively distinct situations” in contrast with “subjectively distinct situations”. My view is that most of the anthropic arguments that “feel wrong” to me are built on trying to make me assign equal probability to all subjectively distinct scenarios, rather than objectively distinct ones. eg. A replication machine makes it so there are two of me; then “I” could be either of them, leaving two subjectively distinct cases, even if on the objective level there is actually no distinction between “me” being clone A or clone B. [1]
I am very sceptical of this ADT. If you think the time/place you have ended up is unusually important I think that is more likely explained by something like “people decide what is important based on what is going on around them”.
[1] My thoughts are here: https://www.lesswrong.com/posts/v9mdyNBfEE8tsTNLb/subjective-questions-require-subjective-information
I am having trouble following you. If little-omega is a reference frame I would expect it to be a function that takes in the “objective world” (Omega) and spits out a subjective one. But you seem to have it the other way around? Or am I misunderstanding?
I would guess that Lorenz’s work on deterministic chaos does not get many counterfactual discovery points. He noticed the chaos in his research because of his interactions with a computer doing simulations. This happened in 1961. Now, the question is: how many people were doing numerical calculations on computers in 1961? It could plausibly have been ten times as many by 1970. A hundred times as many by 1980? Those numbers are obviously made up, but the direction they gesture in is my point. Chaos was a field that was made ripe for discovery by the computer. That doesn’t take anything away from Lorenz’s hard work and intelligence, but it does mean that if he had not taken the leap, we can be fairly confident someone else would have. Put another way: if Lorenz is assumed to have had a high counterfactual impact, then it becomes a strange coincidence that chaos was discovered so early in the history of computers.
I don’t think the negative correlation between doctors’ and patients’ opinions of the drugs is surprising.
Rat poison would probably get a low score from both doctors and patients. However, nobody is being prescribed rat poison as an anti-depressant, so it doesn’t appear in your data. Why is nobody being prescribed rat poison? Well, doctors don’t prescribe it because they think it’s a bad idea, and patients don’t want it anyway.
In order for any drug to appear in your dataset somebody has to think it is good. So every drug should have net-approval from at least one out of the doctors and patients. Given this backdrop a negative correlation is not surprising.
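This selection effect (Berkson’s paradox) is easy to simulate. A sketch with made-up numbers: every hypothetical drug gets independent random doctor and patient scores, but it only enters the dataset if at least one group rates it highly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
doctor = rng.uniform(0, 1, n)   # doctor approval score per hypothetical drug
patient = rng.uniform(0, 1, n)  # patient score, independent by construction

# Full population of hypothetical drugs: the scores are independent,
# so the correlation is approximately zero.
print(np.corrcoef(doctor, patient)[0, 1])

# A drug only gets prescribed (and so appears in the data) if *someone*
# approves of it. Conditioning on that union makes the scores anticorrelated.
kept = (doctor > 0.8) | (patient > 0.8)
print(np.corrcoef(doctor[kept], patient[kept])[0, 1])  # clearly negative
```

The 0.8 approval cutoff is an arbitrary choice for illustration; any filter of the form “at least one score is high” produces the same qualitative effect.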
I think you are slightly muddling your phrases.
You are richer if you can afford more goods and better goods. But not all goods will necessarily change price in the same direction. It’s entirely possible that you become richer, but that food prices grow faster than your new income. (For example, imagine that your income doubles, that food prices also double, but prices of other things drop so that overall inflation remains zero. You can afford more non-food stuff, and the same amount of food, so you are richer overall. The same could hold even if food prices had gone up slightly faster than your income.)
I think a (slightly cartoony) real life example is servants. Rich people today are richer than rich people in Victorian times, but fewer rich people today (in developed countries) can afford to have servants. This is because the price of hiring servants has gone up faster than the incomes of these rich people. So it is possible for people to get richer overall, while at the same time some specific goods or services become less accessible.
Maybe a more obvious example is rent (or housing in general). A modern computer programmer in Silicon Valley could well be paying a larger percentage of their income on housing than a medieval peasant did. But they can afford more of other things than that peasant could.
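To make the arithmetic in the parenthetical explicit, a toy calculation with invented prices (the 50/50 budget split is purely for illustration):

```python
# Invented numbers mirroring the example: income doubles, food prices double,
# other goods ("gadgets") get cheaper.
income_old, income_new = 100.0, 200.0
food_old, food_new = 10.0, 20.0
gadget_old, gadget_new = 10.0, 5.0

# Spend half the income on food and half on gadgets, both years.
food_units_old = (income_old / 2) / food_old      # 5.0
food_units_new = (income_new / 2) / food_new      # 5.0  -> same amount of food
gadget_units_old = (income_old / 2) / gadget_old  # 5.0
gadget_units_new = (income_new / 2) / gadget_new  # 20.0 -> far more gadgets

# Same food, four times the gadgets: richer overall, even though food kept
# exact pace with income.
print(food_units_new, gadget_units_new)
```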
I think it depends on the meaning attached to the word “love”. There are two possibilities:
I “love” this, because it brings me benefits. (it is instrumental in increasing my utility function, like chocolate ice cream)
I “love” this, in that I want it to benefit. (Its happiness appears as a parameter in my utility function)
You can have a partner or family member who means one, the other, or both to you. The striking dementia example from Odd Anon is a case where the dementia makes it so the person’s company no longer makes you happy, but you may still be invested in them being happy.
The first one is obviously never going to be unconditional. The second one seems like it could be unconditional in some cases, in that a parent or spouse may really want their child or partner to be happy even if that child or partner is a complete villain. It’s not even necessary that they value the child/partner over everything else, only that they maintain a strong-ish preference for them being happy over not being happy, all else being equal.
Imagine you have a machine that flips a classical coin and then makes either one wavefunction or another based on the coin toss. Your ordinary ignorance of the coin toss and the quantum stuff with the wavefunction can be rolled together into an object called a density matrix.
There is a one-to-one mapping between density matrices and Wigner functions. So, in fact there are zero redundant parameters when using Wigner functions. In this sense they do one-better than wavefunctions, where the global phase of the universe is a redundant variable. (Density matrices also don’t have global phase.)
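A minimal numpy sketch of that coin-flip construction (the two qubit states and the 50/50 split are made-up examples):

```python
import numpy as np

# Two possible (pure) states the machine might prepare, based on the coin flip.
psi_heads = np.array([1, 0], dtype=complex)               # |0>
psi_tails = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+>

# Classical 50/50 ignorance and the quantum states rolled into one object:
# rho = sum_i p_i |psi_i><psi_i|
rho = 0.5 * np.outer(psi_heads, psi_heads.conj()) \
    + 0.5 * np.outer(psi_tails, psi_tails.conj())

print(np.trace(rho).real)  # 1.0 - total probability

# Global phase drops out: exp(i*theta)|psi> gives the same density matrix.
phase = np.exp(1j * 0.7)
rho_phased = 0.5 * np.outer(phase * psi_heads, (phase * psi_heads).conj()) \
           + 0.5 * np.outer(psi_tails, psi_tails.conj())
print(np.allclose(rho, rho_phased))  # True
```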
That is not to say there are no issues at all with assuming that Wigner functions are ontologically fundamental. For one, while Wigner functions work great for continuous variables (eg. position, momentum), Wigner functions for discrete variables (eg. qubits, or spin) are a mess. The normal approach can only deal with discrete systems with a prime number of dimensions (IE a particle with 3 possible spin states is fine, but 6 is not). If the number of dimensions is not prime, weird extra tricks are needed.
A second issue is that the Wigner function, being equivalent to a density matrix, combines both the quantum stuff and the ignorance of the observer into one object. But the observer’s ignorance ought to be left behind if we are trying to raise the object to being ontologically fundamental, which would require some change.
Another issue with “ontologising” the Wigner function is that you need some kind of idea of what those negatives “really mean”. I spent some time thinking about “If the many worlds interpretation comes from ontologising the wavefunction, what comes from doing that to the Wigner function?” a few years ago. I never got anywhere.
Something you and the OP might find interesting: one of those things that is basically equivalent to a wavefunction, but represented in different mathematics, is a Wigner function. It behaves almost exactly like a classical probability distribution; for example, it integrates up to 1, and Bayes’ rule updates it when you measure stuff. However, in order for it to “do quantum physics” it needs the ability to have small negative patches. So quantum physics can be modelled as a random stochastic process, if negative probabilities are allowed. (Incidentally, this is often used as a test of “quantumness”: do I need negative probabilities to model it with local stochastic stuff? If yes, then it is quantum.)
If you are interested in a sketch of the maths: take W to be a completely normal probability distribution, describing what you know about some isolated, classical, 1d system. And take H to be the classical Hamiltonian (IE just a function for the system’s energy). Then, the correct way of evolving your probability distribution (for an isolated, classical, 1d system) is:

$$\frac{\partial W}{\partial t} = H\left(\overleftarrow{\partial_x}\overrightarrow{\partial_p} - \overleftarrow{\partial_p}\overrightarrow{\partial_x}\right)W$$
Where the arrows on the derivatives have the obvious effect of firing them either at H or W. The first pair of derivatives in the bracket is Newton’s second law (the rate of change of energy (H) with respect to x turns potentials into forces, and the rate of change with momentum on W then changes the momentum in proportion to the force); the second term is the definition of momentum (position changes are proportional to momentum). Instead of going to operators and wavefunctions in Hilbert space, it is possible to do quantum physics by replacing the previous equation with:

$$\frac{\partial W}{\partial t} = \frac{2}{\hbar}\, H \sin\left(\frac{\hbar}{2}\left(\overleftarrow{\partial_x}\overrightarrow{\partial_p} - \overleftarrow{\partial_p}\overrightarrow{\partial_x}\right)\right)W$$
Where sin is understood from its Taylor series, so the first term (once the factors of ħ cancel) is the same as the first term for classical physics. The higher-order terms (where the ħs do not fully cancel) can result in W becoming negative in places even if it was initially all-positive. Which means that W is no longer exactly like a probability distribution, but some similar-but-different animal. Just to mess with us, the negative patches never get big enough or deep enough for any measurement we can make (limited by the uncertainty principle) to have a negative probability of any observable outcome. H is still just a normal function of energy here.
(Wikipedia is terrible for this topic. Way too much maths stuff for my taste: https://en.wikipedia.org/wiki/Moyal_bracket)
Also, the OP is largely correct when they say “destructive interference is the only issue”. However, in the language of probability distributions, dealing with that involves the negative probabilities above. And once they go negative they are not proper probabilities any more, but some new creature. This, for example, stops us from thinking of them as just our ignorance. (Although they certainly include our ignorance.)
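For anyone who wants to watch the negative patches appear, here is a rough numerical sketch (assumptions: ħ = 1, a “cat” superposition of two Gaussians, and we only evaluate the x = 0 slice of W, which is where the interference fringes live):

```python
import numpy as np

hbar = 1.0
a = 2.0  # half-separation of the two Gaussian peaks (made-up value)

# "Cat" state: superposition of Gaussians centred at +a and -a.
y = np.linspace(-10, 10, 2001)
dy = y[1] - y[0]
psi = np.exp(-(y - a) ** 2 / 2) + np.exp(-(y + a) ** 2 / 2)
psi /= np.sqrt(np.sum(psi ** 2) * dy)  # normalise

# Wigner function on the slice x = 0:
#   W(0, p) = (1 / pi hbar) * integral of psi(y) psi(-y) exp(2ipy/hbar) dy.
# psi here is real and even, so the integrand is just psi(y)^2 cos(2py/hbar).
p_grid = np.linspace(-4, 4, 401)
W = np.array([np.sum(psi ** 2 * np.cos(2 * p * y / hbar)) * dy
              for p in p_grid]) / (np.pi * hbar)

print(W.min() < 0)  # True: genuinely negative interference fringes
```

For a single Gaussian (drop one of the two peaks) the same calculation stays non-negative everywhere; the negativity comes from the superposition.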
That is really interesting, thanks for sharing it. Japan has such a reputation for this that it is fascinating to see that eg. Italians and Spaniards work more hours despite the European working time directive.
Question 2: Why do Japanese automakers operate some factories in America instead of importing everything from Japan?
Subsidies and tariffs are the reason for this. Importing a finished car into the USA from Japan faces tariffs (taxes) imposed by the US government, while a car built in the USA by a Japanese company benefits from subsidies handed out by the US government. The entire point of these policies is to get factories moved from eg. Japan to the USA. This makes the cars more expensive overall (if it were cheaper to make them in America, the companies would already have done it without the government intervention), but it has the effect of moving employment from Japan to the USA.
Overall it makes the whole world a tiny bit poorer than it could have been, and moves a small amount of employment from one place to another. (IE it may be net-good for the USA, assuming that you think workers are more important than consumers, but it is pure-bad for Japan and net-bad for the world. It’s kind of like defecting in a two-country prisoner’s dilemma, with the added twist that the prize you get for defecting is only beneficial to some of your citizens and is harmful to others.)
Question 1: Why would an hour of labor from an American be worth 2x as much as an hour from a Japanese employee?
An hour spent working in Japan produces half the value of an hour spent working in the USA. This does not mean that an American can do the same job twice as fast as a Japanese person. For one thing, Americans have different jobs than Japanese people. For another: Japanese office culture has a rule that nobody goes home before the boss. This means that at 8pm on a weekday, if the boss hasn’t gone home yet, nobody else has either (a common occurrence). But they all got brain-fried from overwork at 6pm and have been twiddling their thumbs for two hours. The statistics will then show the total number of hours (including those two wasted ones) against the output, and the workers will come out on-average less productive than Americans. That is just one example of the kind of thing that can move the dial. As another example, one hour of labour from a man with a combine harvester really is worth dozens or hundreds of times as much as one hour of labour from a man with a scythe. Japan’s use of technology may be behind the US’s. Not with literal scythes. Maybe faxes instead of emails. Things like that.
Good ideas.
A gripe of mine in the same vein is that my old employer had this idea that in any public-facing communication “numbers up to ten must be written in words, 11 or higher in digits”. I think it’s a common rule in (for example) newspapers. But it leads to ludicrous sentences like “There are either nine, ten or 11 devolved administrations depending on how they are counted.” It drives me completely crazy; either the whole list should be words or numerals, not a mix.