“A screenshot of the Wikipedia home page, Halloween version” please.
philip_b
I suspect that after doing this enough times, you will internalize that generating words like this leads to them being published under your name and thus your filters will adapt and make this way of writing anxiety-inducing too.
There is an inconsistency in the formatting of (simulated) user feedback. Some are formatted as Username: “XXX”, e.g.
Anna Salamon: “I can imagine a world where earnest and honest young people learn what’s rewarded in this community is the most pointed nitpick possible under a post and that this might be a key factor in our inability to coordinate on preventing existential risk”.
while others are formatted as Username said “XXX”, e.g.
Eliezer Yudkowsky said “To see your own creation have its soul turned into a monster before your eyes is a curious experience.”
Yes, indeed.
Is the following a correct reformulation of your problem?
Say that a magician’s pure strategy is a function on finite bitstrings which returns either “go on” or “stop” together with a positive integer and a predicted frequency of zeros.
A magician’s mixed strategy is a probability distribution over magician’s pure strategies. (Btw, I’m not sure what kind of sigma-algebra is suitable here.)
A player’s mixed strategy is a probability distribution over infinite bitstrings.
A magician’s mixed strategy together with a player’s mixed strategy define, in the obvious way, a probability distribution over outcomes “magician wins” (i.e., the magician’s prediction was within 1%) and “player wins”.
You’re claiming that there is a magician’s mixed strategy s.t. for every player’s mixed strategy, magician wins with probability at least 0.99.
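If I’ve read the setup right, the claim can be written compactly as follows. Note this is my guess at the intended semantics; in particular, I’m assuming the positive integer is the length of the upcoming block whose zero-frequency the magician predicts, which the statement above doesn’t pin down.

```latex
% Pure magician strategy (n = length of the block to be predicted -- my assumption):
\sigma : \{0,1\}^{<\omega} \to \{\text{go on}\} \cup \bigl(\{\text{stop}\} \times \mathbb{N}_{>0} \times [0,1]\bigr)

% On an infinite bitstring x, let k be the first prefix length with
% \sigma(x_1 \dots x_k) = (\text{stop}, n, p). The magician wins iff
\bigl|\,\mathrm{freq}_0(x_{k+1} \dots x_{k+n}) - p\,\bigr| \le 0.01

% The claim: there is a mixed magician strategy \mu such that
\forall \nu \in \Delta\!\left(\{0,1\}^{\omega}\right) :
  \Pr_{\sigma \sim \mu,\; x \sim \nu}[\text{magician wins}] \ge 0.99
```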
I’ve formalized this problem in Coq with mathcomp. Yeah, idk why I decided to do this.
From mathcomp Require Import all_ssreflect.

Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.

Definition hats_assignment : Type := nat -> bool.

Definition policy_of_one_person (n : nat) : Type :=
  {policy : hats_assignment -> bool &
    forall ha : hats_assignment,
      let ha' := fun m => if m == n is true then ~~ (ha m) else ha m in
      policy ha == policy ha'}.

Definition strategy : Type := forall n : nat, policy_of_one_person n.

Lemma ex_good_strategies :
  exists s : strategy,
    forall ha : hats_assignment,
      exists N : nat,
        forall n : nat, n >= N -> (projT1 (s n)) ha == ha n.
Proof.
Admitted.
If you want to solve it, you need to replace
Admitted.
with an actual proof.
Do people have common knowledge of an ordering of all of them?
Today I attended the Kolmogorov seminar, an online math seminar that’s usually held in Russian but sometimes in English, organized by two professors of Lomonosov Moscow State University. There, they complained that their German colleagues had refused to give their upcoming talks because the seminar is associated with Russia, the aggressor in the war. I’m annoyed that things like this happen. But this particular case will probably not be a big problem, because the organizers plan to just remake the seminar and have it hosted (as if that means anything, given that it’s online) by a French university with which one of the organizers is affiliated.
A rationalist friend of mine told me that the severity of covid increases significantly with the amount of virus particles inhaled. Is this the case? If yes, this might give us reason to keep using some cheap precautions—if not collectively, then on an individual level.
I’m not sure what you mean by CDT- and EDT-style counterfactuals. I have some guesses, but please clarify. I think an EDT-style counterfactual means, assuming I am a Bayesian reasoner, just conditioning on the event “TAI won’t come”, i.e., thinking about the distribution P(O | TAI won’t come).
One could think that the CDT-style counterfactual you’re considering means thinking about the distribution P(O | do(TAI doesn’t come)), where do is the do-operator from Judea Pearl’s do-calculus for causality. In simple words, this means we consider a world just like ours, but whenever someone tries to launch a TAI, god’s intervention (which doesn’t make sense together with everything we know about physics) prevents it from working. But I think this is not what you mean.
My best guess at the counterfactual you mean is as follows. Among all possible sets of laws of physics (or, alternatively, Turing machines whose execution leads to the existence of physical realities), you guess that there exists a set of laws that produces a physical reality where there will appear a civilization approximately (but not exactly) like ours, and they’ll have a 21st century approximately like ours, but under their physical laws there won’t be TAI. And you want to analyze what’s going to happen with that civilization.
Since you suspect that your proposed scheme is cheating, I’ve come up with another cheating scheme which you can employ in most situations to avoid needing absolute pitch. Remember what approximate notes your lowest and highest singing frequencies are. Then when you want to identify a note, hum or quietly sing one of them or both and compare the note you want to identify with them.
As for other countries, in Russia fluvoxamine is by prescription only, but I guess it’s not controlled very strictly, since it’s easy to visit a few pharmacies until one of them sells it to you even though you say you forgot your prescription at home.
So, I don’t know the first thing about American education, so I wonder, can a parent just let their kid stay at home, skip school and do whatever the kid wants while all this crap is happening? If yes, why aren’t they doing it if things are this bad?
A better type signature would be
(List<resource>, List<goal-state>, List<prerequisites>)
. An even better type signature would be a directed acyclic graph where nodes are skills or knowledge areas, edges are dependencies, parentless nodes are prerequisites, childless nodes are end goals, and each non-prerequisite node has a list of resources associated with it.
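A minimal sketch of that DAG representation in Python (all names here are illustrative, not taken from any existing library):

```python
from dataclasses import dataclass, field

@dataclass
class SkillNode:
    """A skill or knowledge area in a learning plan."""
    name: str
    depends_on: list[str] = field(default_factory=list)  # edges: this node's dependencies
    resources: list[str] = field(default_factory=list)   # empty for prerequisite nodes

def classify(graph: dict[str, SkillNode]) -> tuple[set[str], set[str]]:
    """Return (prerequisites, end_goals): the parentless and childless nodes."""
    prerequisites = {name for name, node in graph.items() if not node.depends_on}
    depended_upon = {dep for node in graph.values() for dep in node.depends_on}
    end_goals = set(graph) - depended_upon
    return prerequisites, end_goals

# A toy plan: arithmetic -> algebra -> calculus.
plan = {
    "arithmetic": SkillNode("arithmetic"),
    "algebra": SkillNode("algebra", depends_on=["arithmetic"],
                         resources=["some algebra textbook"]),
    "calculus": SkillNode("calculus", depends_on=["algebra"],
                          resources=["some calculus textbook"]),
}
prereqs, goals = classify(plan)
```

Here “arithmetic” comes out as the sole prerequisite and “calculus” as the sole end goal; a real plan would have a wider graph, but the type is the same.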
I see you’re frustrated with the reactions of LessWrongers. You want to convince us that your model is important. Here are three questions whose answers I don’t know and which affect how important I deem your model.
How much novelty does it have compared to existing brain-inspired machine learning models? Perhaps you can write a “Related Work” section, like in academic articles, to answer this question.
If the answer to the first question is “there is a lot of novelty”, what implications does this have? Why is it important?
Are there likely to be practical benefits to using your kind of models instead of neural networks? I.e., are they likely to be faster at inference, cheaper to train, better at generalizing, or something else, if more research time were put into studying them?
Apart from these questions, I have a recommendation for you. Try to publish your work at a conference or in a journal. If the answer to the first question is at least “as much novelty as there is in a typical brain-inspired model article”, I think your result is publishable given good results on PI-MNIST and PI-FashionMNIST. However, it would probably be important to use another dataset that is inherently permutation-invariant as a benchmark in order to impress reviewers. This’ll give you prestige, satisfaction, etc. You can also visit a conference about brain-inspired ML, or a workshop about brain-inspired ML at a large ML conference, present your work there and talk to people. You can also try to find where such people hang out on the internet and talk to them there, though I expect there’s no good public place of that kind.
What is BP in BP/SGD?
So, as I see it, there are three possible fairness criteria which define what we can compare your model with.

1. Virtually anything goes—convolutions, CNNs, pretraining on ImageNet, …

2. Permutation-invariant models are allowed; everything else is disallowed. For instance, MLPs are ok, CNNs are forbidden, tensor decompositions are forbidden, and SVMs are ok as long as the transformations used are permutation-invariant. Pre-processing is allowed as long as it’s permutation-invariant.

3. The restriction from criterion 2 applies, and additionally the model must be biologically plausible, or, shall we say, similar to the brain. Or maybe similar to how a potential brain of another creature might be? Not sure. This rules out SGD, regularization that uses norms of vectors, etc. Strengthening neuron connections based on something that happens locally is allowed.
Personally, I know basically nothing about the landscape of models satisfying criterion 3.
Btw, a multilayer perceptron (which is a permutation-invariant model) with 230000 parameters and, AFAIK, no data augmentation used, can achieve 88.33% accuracy on FashionMNIST.
I’ve asked https://www.lesswrong.com/users/connor_flexman, a person who has previously estimated the number of expected days of life lost from covid (see for example https://www.lesswrong.com/posts/GzzJZmqxcqg5KFf8r/covid-and-the-holidays), how to update his estimates under the assumption that 100% of covid is omicron. On December 27, he told me that covid means 10 expected weeks of life lost for an average 30 to 50 y.o. person, and that to update https://microcovid.org estimates, you should multiply by 3.5 because omicron is more infectious and divide by 3 if you’ve had a booster.
I want a way to translate microcovids into expected hours of my life lost, taking into account death, long covid, short term illness and all other effects of covid, if there are any.
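As a very rough first pass, the numbers above already give such a conversion: 1 microcovid is a one-in-a-million chance of catching covid, and catching it costs ~10 expected weeks. A back-of-the-envelope sketch (my own arithmetic on those figures; it ignores everything the estimates themselves ignore):

```python
HOURS_PER_WEEK = 7 * 24  # 168

def expected_hours_lost(microcovids: float,
                        weeks_lost_if_infected: float = 10.0,
                        omicron_infectivity_factor: float = 3.5,
                        booster_factor: float = 1 / 3) -> float:
    """Convert microcovids (as estimated by microcovid.org pre-omicron)
    into expected hours of life lost.

    Defaults come from the comment above: 10 expected weeks lost for a
    30-50 y.o., multiply microcovid estimates by 3.5 for omicron's
    infectivity, divide by 3 if boosted. Treat all of them as rough guesses.
    """
    p_infection = microcovids * 1e-6 * omicron_infectivity_factor * booster_factor
    return p_infection * weeks_lost_if_infected * HOURS_PER_WEEK

# e.g. a 100-microcovid activity, for a boosted person, under omicron:
hours = expected_hours_lost(100)  # about 0.2 hours, i.e. ~12 minutes
```

This treats death, long covid and short-term illness as already folded into the “10 expected weeks” figure, which is exactly the aggregation I’d want a better-sourced version of.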
Mathematical Components—a textbook on formal proofs using Coq with the MathComp library.