The Brussels meetup is typically in English.
Make the immigrants pay higher taxes.
Immigrants to the UK pay, on average, more taxes than native Brits.
I wonder if there’s any significance to the fact that there is only one (known?) material out of which a space elevator could be constructed on Earth. Most planets, I would expect, will be such that either no material is strong enough, or several materials are strong enough. Earth lies just on the boundary.
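To put a rough number on why Earth sits near that boundary, here is a back-of-the-envelope sketch in Python. The figure of merit is a material’s “breaking length” σ/(ρg), i.e. how long a uniform cable can be before it snaps under its own weight; the material properties and the “few thousand kilometres” requirement below are approximate, illustrative numbers of my own, not precise engineering figures.

```python
# Rough comparison of candidate cable materials by "breaking length",
# sigma / (rho * g): the length of a uniform cable that just supports
# its own weight in Earth surface gravity.
#
# NOTE: the strengths/densities are approximate textbook values, and the
# "few thousand km" requirement for a low-taper Earth space elevator is a
# rough, assumed figure used here only for illustration.

g = 9.81  # m/s^2

materials = {
    # name: (tensile strength [Pa], density [kg/m^3])
    "high-strength steel": (2.0e9, 7850),
    "Kevlar":              (3.6e9, 1440),
    "carbon nanotube":     (63e9,  1300),
}

for name, (strength, density) in materials.items():
    breaking_length_km = strength / (density * g) / 1000
    print(f"{name:20s}: breaking length ~{breaking_length_km:7.0f} km")
```

On these rough numbers, steel and Kevlar fall short of the few-thousand-kilometre ballpark by one to two orders of magnitude, while carbon nanotubes land almost exactly on it, which is one way of seeing why Earth appears to sit right at the boundary.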
the same institutions that educate students also give them their degrees and grades.
In the Netherlands and Flanders, there is a government organisation responsible for maintaining the quality of higher education. It is true that they do not grade the students, but they do look at past exams to verify if the questions asked were sufficiently difficult and if they were graded properly. They use experts from competing universities to help them make their judgements.
The anecdote in this post, about Fermi, Rabi and Szilard considering keeping the possibility of practical nuclear fission a secret, may shed some light on the subject. He thinks that some knowledge is dangerous enough that people who know it may reasonably want to keep it secret.
(much more recently, there has been some controversy about the publication of a way of obtaining a particularly infectious strain of a certain virus, but I can’t find any references for that right now)
You said
There is in fact such a thing as a learner with a sub-Turing hypothesis class. Such a learner with such a primitive output as “in the class” or “not in the class” does not engage in world optimization, that is: its actions do not, to its own knowledge, skew any probability distribution over future states of any portion of the world outside itself. … Now, what we’ve been proposing as an Oracle is even less capable.
which led me to think you were talking about an oracle even less capable than a learner with a sub-Turing hypothesis class.
It would truly have no outputs whatsoever, only input and a debug view. It would, by definition, be incapable of narrowing the future of anything, even its own internal states.
If the hypotheses it considers are Turing-complete, then, given enough information (and someone would give it enough information, otherwise they couldn’t do anything useful with it), it could model itself, its environment, the relation between its internal states and what shows up on the debug view, and the reactions of its operators to the information they learn from that debug view. Its (internal) actions very much would, to its own knowledge, skew the probability distribution over future states of the outer world.
Physics is Turing-complete, so no, a learner that did not consider Turing-complete hypotheses could not model the outer environment.
This thing you are proposing, an “oracle” that is incapable of modeling itself and incapable of modeling its environment (either would require Turing-complete hypotheses), what could it possibly be useful for? What could it do that today’s narrow AI can’t?
I agree, of course, that none of the examples I gave (“primitive classifiers”) are dangerous. Indeed, the “plans” they are capable of considering are too simple to pose any threat (they are, as you say, not Turing complete).
But that doesn’t seem relevant to the argument at all. You claimed
a very general learning algorithm with some debug output, but no actual decision-theory or utility function whatsoever built in. That would be safe, since it has no capability or desire to do anything.
You claimed that a general learning algorithm without decision-theory or utility function is possible. I pointed out that all (harmless) practical learning algorithms we know of do in fact have decision theories and utility functions. What would “a learning algorithm without decision-theory or utility function, something that has no desire to do anything” even look like? Does the concept even make sense? Eliezer writes here
A string of zeroes down an output line to a motorized arm is just as much an output as any other output; there is no privileged null, there is no such thing as ‘no action’ among all possible outputs. To ‘do nothing’ is just another string of English words, that would be interpreted the same as any other English words, with latitude.
The most popular algorithm for SVMs does plan a sequence of actions, complete with heuristics as to which action to take. True, the “actions” are internal: they are changes to some data structure within the computer’s memory, rather than changes to the external world. But that is not so different from e.g. a chess AI, which assigns some heuristic score to chess positions and attempts to maximize it using a decision algorithm (to decide which move to make), even though the chessboard is just a data structure within the computer memory.
All existing learning algorithms I know of, and I dare say all that exist, have at least a utility function, and also something that could be interpreted as a decision theory. Consider for example support vector machines, which explicitly try to maximize a margin (that would be the utility function), and any algorithm for computing SVMs can be interpreted as a decision theory. Similar considerations hold for neural networks, genetic algorithms, and even the minimax algorithm.
Thus, I strongly doubt that the notion of a learning algorithm with no utility function makes any sense.
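To make concrete what I mean by a learner having both a utility function and a decision theory, here is a minimal toy sketch (my own illustration, not any particular library’s implementation) of a linear SVM trained by subgradient descent: the regularized hinge-loss objective plays the role of the utility function, and the update rule is the “decision procedure” that chooses each internal action (a change to the weight vector).

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=100):
    """Toy soft-margin linear SVM via subgradient descent.

    The objective being minimized (the negative of a "utility function") is
        J(w) = lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w)))
    and each gradient step is a "decision": an internal action (changing w)
    chosen because it is expected to improve J.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1                      # samples violating the margin
        grad = lam * w - (X[active] * y[active, None]).sum(axis=0) / n
        w -= lr * grad                            # the chosen "action"
    return w

# Tiny usage example on linearly separable data
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = train_linear_svm(X, y)
print("weights:", w, "predictions:", np.sign(X @ w))
```

Nothing in this loop “wants” anything in a scary sense, of course; the point is only that even this trivial learner already has an explicit objective and an explicit rule for choosing its next internal step, so it is hard to see what “a learning algorithm with no utility function and no decision theory” would even be.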
What other option is there? Preferring to act out of rational calculation towards a goal would put the speaker among those who “believe that solutions emerge from judicious study of discernible reality”, i.e. the very people he’s arguing against. We are left to guess what alternative decision procedure the speaker is proposing. eli_sennesh’s interpretation is one possibility; do you have another?
I think you are underestimating the importance of being well-informed for being an “agenty mover and shaker”. Look at this guy and these guys for example. Were they “agenty movers and shakers”? They certainly tried!
Even the famous Sun Tzu, hardly a passive observer himself, devotes an entire chapter to the importance of being well-informed.
in other words, remoulding the world to suit your own purposes
Denotationally, that seems like a reasonable interpretation. It sets off solipsism warnings in my head, possibly because I know some self-described solipsists who really are fond of using that kind of phrasing.
However, the speaker could have chosen to say this in a more straightforward way, as you do. Something like “We are an empire now, we have the power to remould parts of the world to better suit our purposes”. And yet, he did not. Why not? This is not a rhetorical question, I’m open to other possible answers, but here’s what I think:
I don’t think it is very controversial that this quote is arguing against “the reality-based community”. It is trying to give the impression that “acting … to create our own reality” is somehow contradictory to “solutions emerging from your judicious study of discernible reality”. In reality of course, most or all effective attempts at steering reality towards a desired goal are based on “judicious study of discernible reality”. He is trying to give the impression that he (“we, an empire”) can effectively act without consulting, or at least using the methods of, the “reality-based community”. He doesn’t say that denotationally, because it’s false.
These quotes don’t seem similar to me at all.
The first quote talks of “changing” reality, the second talks of “creating” it, making the first seem like an encouragement to try and change reality, and the second like solipsism (specifically, “creating our own reality”).
The second also seems very dismissive of the need to think before you act, the first much less so (if at all).
As the amount of harm threatened gets larger, the probability that the mugger is maximizing approaches unity.
It seems entirely possible that some finite nonzero fraction of “matrix lords” is capable of carrying out arbitrarily large threats, or of providing arbitrarily large rewards (provided that the utility function by which you judge such things is unbounded).
As the probability that the mugger is engaged in maximizing approaches unity, the likelihood that the mugger’s claim is true approaches zero.
It is possible that the mugger is maximizing while still telling the truth.
That’s an extraordinarily weak “based on”
That is true, and intentional. It is far from obvious that the connection between the fictional Jesus and the (hypothetical?) historical one is any less tenuous than that (1). The comparison also underscores the pointlessness of the debate: just as evidence for Vlad Dracul’s existence is at best extremely weak evidence for the existence of vampires, so too is evidence for a historical Jesus at best extremely weak evidence for the truth of Christianity.
Was Bram Stoker’s Dracula “based on” a real person? Possibly, given an extremely weak interpretation of “based on”.
What does it take for a fictional character to be based on a real person? Does it suffice to have a similar name and live in a similar place at a similar time? Do they have to perform similar actions as well? This has to be made clear before the question can be meaningfully answered.
So far as I know, we have no good theories apart from “it just happened to work out that way”. This is pretty unsatisfying.
Keep in mind that the Earth-Moon distance is not constant. The Moon appeared larger in the past and will appear smaller in the future.
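For a rough sense of scale, here is a naive back-of-the-envelope sketch (my own illustrative numbers: it simply extrapolates the currently measured recession rate of roughly 3.8 cm/year, which has not actually been constant over geological time):

```python
import math

# Naive extrapolation of the Moon's apparent size, assuming (unrealistically)
# a constant recession rate of ~3.8 cm/year (the lunar laser ranging value).
moon_diameter_km = 3474.0
distance_now_km = 384_400.0          # mean Earth-Moon distance today
recession_km_per_yr = 3.8e-5         # ~3.8 cm per year

def angular_size_deg(distance_km):
    return math.degrees(2 * math.atan(moon_diameter_km / (2 * distance_km)))

for label, years in [("1 billion years ago", -1e9),
                     ("today", 0),
                     ("1 billion years from now", 1e9)]:
    d = distance_now_km + recession_km_per_yr * years
    print(f"{label}: ~{d:,.0f} km, apparent diameter ~{angular_size_deg(d):.2f} deg")
```

Even on this crude extrapolation the Moon’s apparent diameter changes by roughly ten percent per billion years, so the near-perfect match with the Sun’s apparent size really is specific to the present era.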
Indeed. Truly reversing the measurement would involve also forgetting what the result of the measurement was, and Copenhagenists would claim this forgotten intermediate result does not count as a “measurement” in the sense of something that (supposedly) collapses the wave function.