This is amazingly great (I laughed out loud at the “Biceps-controlled socialism” graph), but I feel it only works because the original study authors made the rookie mistake of publishing their data set. The one time I wanted to try something similar (for the brain mosaic paper), I hoped it would be possible to extract the data from the diagram, but no luck: the jpg in the pdf is too low-resolution for that to work.
Pfft
Ok, so we should identify criminals with “thoughts of committing deadly violence, regardless of action”, and then “many of these offenders should probably never be released from confinement”. A literal thought crime.
Yes, there will always be some off-by-one errors, so the best we can hope for is to pick the convention that creates fewer of them. That said, the fact that most programming languages choose the zero-based convention seems to suggest that it’s the best one.
There’s also the revealed word of our prophet Dijkstra: EWD831, “Why numbering should start at zero”.
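To make the off-by-one point concrete, here’s a tiny sketch (nothing deep, just Dijkstra’s half-open convention as it appears in Python’s own zero-based slices):

```python
# Dijkstra's half-open convention [a, b): the length is simply b - a,
# and adjacent ranges meet with no +1/-1 fiddling.
items = ["a", "b", "c", "d", "e"]

lo, mid, hi = 0, 2, len(items)
left, right = items[lo:mid], items[mid:hi]   # split with no overlap and no gap
assert left + right == items
assert len(items[lo:hi]) == hi - lo          # length without a +1 correction
```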
Yeah.
I think the orthodox MIRI position is not that logical proofs are necessary, or even the most efficient way, to make a super-intelligence. It’s that humans need formal proofs to be sure that the AI will be well-behaved. A random kludgy program might be much smarter than your carefully proven one, but that’s cold comfort if it then proceeds to kill you.
I mean, you can literally build an EmDrive yourself, but you definitely can’t measure the tiny thrust yourself. You still need to trust the experts there, no?
Apart from the question about whether it produces any thrust, there is also the question of whether it will lead to any interesting scientific discoveries. For example, if it turns out that there was a bit of contaminating material that evaporated, the thrust is real but the space-faring implications are not...
Eh, elections seem hard to update on though. Before the election, I thought Clinton was 70% likely to win or so, because that’s what Nate Silver said. Then Trump won. Was I wrong? Maybe, but the result isn’t statistically significant even at p = 0.05.
So just looking at U.S. presidential elections, you’ll never have enough data to see whether you’re calibrated or not. I guess you can seriously geek out on politics, and follow and make predictions for lots of local and foreign elections as well. At that point it’s a serious hobby, though, and I’m much more of a casual.
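For concreteness, a rough sketch of the sample-size problem (assuming, unrealistically, that every forecast is a flat “70% favorite”; `miss_pvalue` is just a name I made up):

```python
from scipy.stats import binomtest

# If the "70% favorite" forecasts were right, how surprising are k upsets in n races?
def miss_pvalue(k_misses, n_forecasts, p_miss=0.3):
    # one-sided: "the favorite loses more often than the forecast implies"
    return binomtest(k_misses, n_forecasts, p_miss, alternative="greater").pvalue

print(miss_pvalue(1, 1))    # ~0.30: a single Trump-style upset proves nothing
print(miss_pvalue(3, 3))    # ~0.027: three straight upsets would finally be significant
print(miss_pvalue(4, 10))   # ~0.35: even 4 misses in 10 races is unremarkable
```

Three straight upsets is already twelve years of presidential elections, which is roughly the point about never having enough data.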
Any suggestions?
It sounds pretty spectacular!
I found one paper about comets crashing into the sun, but unfortunately they don’t consider comets as big as yours—the largest one is a “Hale-Bopp sized” one, which they take to be 10^15 kg (which already seems a little low; Wikipedia suggests 10^16 kg).
I guess the biggest uncertainty is how common such big comets are (that is, how often we should expect to see one crash into the sun). In particular, I think the known sun-grazing comets are much smaller than the big comet you consider.
Also, I wonder a bit about your 1 second. The paper says:
The primary response, which we consider here, will be fast formation of a localized hot airburst as solar atmospheric gas passes through the bow-shock. Energy from this airburst will propagate outward as prompt electromagnetic radiation (unless or until bottled up by a large increase in optical depth of the surrounding atmosphere as it ionizes), then in a slower secondary phase also involving thermal conduction and mass motion as the expanding hot plume rises.
If a lot of the energy reaching the Earth comes from the prompt radiation, then it should arrive in one big pulse. On the other hand, if the comet plunges deep into the sun, and most of the energy is absorbed and then transmitted via thermal conduction and mass motion, then that must be a much slower process. By comparison, a solar flare involves between 10^20 and 10^25 J, and it takes several minutes to develop.
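For scale, a rough back-of-the-envelope (assuming a 10^16 kg comet arriving at roughly the Sun’s escape velocity of ~620 km/s, which should be about right for a near-parabolic sun-grazer):

```python
# Kinetic energy of a Hale-Bopp-mass comet hitting the Sun (rough estimate).
m = 1e16           # comet mass in kg (the Wikipedia-ish Hale-Bopp figure)
v = 6.2e5          # ~620 km/s, roughly solar escape velocity at the surface
E = 0.5 * m * v**2
print(f"{E:.1e} J")  # ~1.9e27 J, a couple of orders of magnitude above a large flare
```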
See Wikipedia. The point is that T does not just take the input n to the program to be run; it takes an argument x which encodes the entire list of steps the program e would execute on that input. In particular, the length of the list x is the number of steps. That’s why T can be primitive recursive.
The claim as stated is false. The standard notion of a UTM takes a representation of a program and interprets it. That’s not primitive recursive, because the interpreter has an unbounded loop in it. The thing that is primitive recursive is a function that takes a program and a number of steps to run it for (this corresponds to the U and T in the normal form theorem), but that’s not quite what’s usually meant by a universal machine.
I think the fact that you just need one loop is interesting, but it doesn’t go as far as you claim; if an angel gives you a program, you still don’t know how many steps to run it for, so you still need that one unbounded loop.
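To make the distinction concrete, here is a toy sketch (a made-up mini-machine of my own, not Kleene’s actual encoding): the bounded runner plays the role of T and U, and the universal machine is that same runner wrapped in the one unbounded search.

```python
# A toy machine just to show the structure (not Kleene's real encoding):
# a "program" is a list of instructions over one register r, each either
# ("inc",), ("dec",), or ("jnz", target) -- jump to `target` if r != 0.

def step(program, pc, r):
    op = program[pc]
    if op[0] == "inc":
        return pc + 1, r + 1
    if op[0] == "dec":
        return pc + 1, max(r - 1, 0)
    return (op[1], r) if r != 0 else (pc + 1, r)    # "jnz"

def run_bounded(program, n, steps):
    """The T/U-style piece: the loop bound is supplied up front, no unbounded search."""
    pc, r = 0, n
    for _ in range(steps):
        if pc >= len(program):           # fell off the end: halted
            return r
        pc, r = step(program, pc, r)
    return None                          # did not halt within `steps` steps

def run_universal(program, n):
    """The usual universal machine: the one unavoidable unbounded loop."""
    steps = 0
    while True:                          # the single while-loop / mu-operator
        result = run_bounded(program, n, steps)
        if result is not None:
            return result
        steps += 1

# e.g. a program that zeroes its input: "dec; jump back to 0 while r != 0"
print(run_universal([("dec",), ("jnz", 0)], 5))   # -> 0
```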
I’m not sure what you have in mind for the treatment of risk in finance. People will be concerned about risk in the sense that they compute a probability distribution of the possible future outcomes of their portfolio, and try to optimize it to limit possible losses. Some institutional actors, like banks, have to compute a “value at risk” measure (the loss of value in the portfolio at the bottom 5th percentile), and have to put up collateral based on that.
But those are all things that happen before a utility computation; they are all consistent with valuing a portfolio based on the average of some utility function of its monetary value. Finance textbooks do not talk much about this; they just assume that investors have some preference about expected returns and variance in returns.
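As a toy illustration of that value-at-risk computation (made-up lognormal numbers, nothing realistic):

```python
import numpy as np

# 5% value at risk: the loss at the bottom 5th percentile of simulated outcomes.
rng = np.random.default_rng(0)
current_value = 1_000_000
future_values = current_value * rng.lognormal(mean=0.05, sigma=0.2, size=100_000)

var_5 = current_value - np.percentile(future_values, 5)
print(f"5% VaR: {var_5:,.0f}")   # the loss you'd expect to exceed only 5% of the time
```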
It is very standard in economics, game theory, etc., to model risk aversion as a concave utility function. If you want some motivation for why, then e.g. the von Neumann–Morgenstern utility theorem shows that a suitably idealized agent will maximize expected utility. But in general, the proof is in the pudding: the theory works in many practical cases.
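A minimal sketch of what “concave utility = risk aversion” means in practice (log utility is just the usual textbook example, not a claim about any particular market):

```python
import numpy as np

def u(wealth):
    return np.log(wealth)   # concave, so the agent is risk-averse

wealth = 100.0
# a fair gamble: 50/50 lose or gain 50
expected_value   = 0.5 * (wealth - 50) + 0.5 * (wealth + 50)     # 100.0, same as not betting
expected_utility = 0.5 * u(wealth - 50) + 0.5 * u(wealth + 50)   # about 4.46

print(expected_value, u(wealth), expected_utility)
# 100.0  4.61  4.46  -> the sure thing beats the fair gamble, despite equal expected value
```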
Of course, if you want to study exactly how humans make decisions, then at some point this will break down. E.g. the decision process predicted by Prospect Theory is different from maximizing expected utility. So in general, the exact flavour of risk aversion exhibited by humans seems different from what von Neumann–Morgenstern would predict.
But at that point, you have to start thinking whether the theory is wrong, or the humans are. :)
She eventually gives him the carrot pen so he can delete the recording, no?
I took the survey!
I write down one line (about 80 characters) about what I did each day. Originally I intended to write down “accomplishments” in order to incentivise myself into being more accomplished, but it has since morphed into also being a record of notable things that happened, and a lot of free-form whining about how bad certain days are. It’s kind of nice to be able to go back and figure out when exactly something in the past happened, or generally reminisce about what was going on some years ago.
There is Omnilibrium, which does the vote SVD-ing thing.
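In case it helps, the general idea behind “SVD-ing” votes (toy numbers of my own, not Omnilibrium’s actual pipeline): factor the user-by-post vote matrix and read off the dominant axes of agreement.

```python
import numpy as np

# Rows are users, columns are posts, entries are up/down/abstain votes.
votes = np.array([
    [ 1,  1, -1, -1],
    [ 1,  1, -1,  0],
    [-1, -1,  1,  1],
    [ 0, -1,  1,  1],
])

U, s, Vt = np.linalg.svd(votes, full_matrices=False)
# The first left-singular vector places each user on the dominant axis of
# disagreement; the two voting blocs above end up with opposite signs.
print(np.round(U[:, 0], 2))
```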
He is a historian, studying the history of science. That subject is exactly about studying what people (scientists) are saying.
I think Shane Legg’s universal intelligence itself involves Kolmogorov complexity, so it’s not computable and will not work here. (Also, it involves a function V encoding our values; if human values are irreducibly complex, that should add a bunch of bits.)
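For reference, the Legg–Hutter definition as I remember it (E is the class of computable environments, K is Kolmogorov complexity, and V_μ^π is agent π’s expected value in environment μ):

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}$$

Since K is incomputable, Υ is too.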
In general, I think this approach seems too good to be true? An intelligent agent is one which performs well in its environment. But don’t the “no free lunch” theorems show that you need to know what the environment is like in order to do that? Intuitively, that’s what should cause the Kolmogorov complexity to go up.
So in the case of this particular paper, some other researchers did ask for the raw data, and they got it and carried out exactly the analysis I was interested in. So I guess it’s a happy ending, except I didn’t get to write a Tumblr post back when there was a lot of buzz in the media about it. :)