Open Thread, Sept. 5 - Sept. 11, 2016
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options “Notify me of new top level comments on this article” and “
Today, I uploaded a sequence of three working papers to my website at https://andershuitfeldt.net/working-papers/
This is an ambitious project that aims to change fundamental things about how epidemiologists and statisticians think about choice of effect measure, effect modification, and external validity. A link to an earlier version of this manuscript was posted to Less Wrong half a year ago; the manuscript has since been split into three parts and improved significantly. This work was also presented in poster form at EA Global last month.
I want to give a heads up before you follow the link above: Compared to most methodology papers, the mathematics in these manuscripts is definitely unsophisticated, almost trivial. I do however believe that the arguments support the conclusions, and that those conclusions have important implications for applied statistics and epidemiology.
I would very much appreciate any feedback. I invoke "Crocker's Rules" (see http://sl4.org/crocker.html) for all communication regarding these papers. Briefly, this means that I ask you, as a favor, to please communicate any disagreement as bluntly and directly as possible, without regard to social conventions or to how such directness may affect my personal state of mind.
I have made a standing offer to give a bottle of Johnnie Walker Blue Label to anyone who finds a flaw in the argument that invalidates the paper, and a bottle of 10-year-old single malt Scotch to anyone who finds a significant but fixable error, or makes a suggestion that substantially improves the manuscript.
If you prefer giving anonymous feedback, this can be done through the link http://www.admonymous.com/effectmeasurepaper .
[Forgetting Important Lessons Learned]
Does this happen to you?
I'm not necessarily talking about mistakes you've made which caused significant emotional pain and taught you an important lesson. I think those tend to be easier to remember. I'm more referring to personal processes you've optimized, or things you've spent time thinking about before deciding on the best way to approach that type of problem. …and then a similar situation or problem appears months or years later and you either (a) fail to recognize it's a similar situation, (b) completely forget about the previous situation and the conclusion you previously reached about the best way to handle this type of problem, or (c) fail to even really think about the new situation as a problem you may have previously solved.
Anyone else frustrated by this?
Do you have any strategies you use to overcome this problem?
Something that might help is writing things down. For example, if you had a notebook where you wrote down things that you had figured out, every time you came to a conclusion, and any details that might help you remember why you came to that conclusion. Then, whenever you encounter a problem you can read over the notes in the notebook from a variety of topics, and see if any of them match. Also, if you keep it updated frequently then when you go to write something down that would be another opportunity to review the notebook and see if anything matches something else that’s bothering you.
Or if physically writing things in a notebook isn’t something you want to do, sending yourself an email could work in a similar way.
In general, I’ve found that writing things down helps with remembering things.
I’m not sure how much this would help you in particular, but spaced repetition, when done right, should jog your memory and make you work to recall something just before you would have forgotten it.
In order to learn and remember to apply useful concepts, I have an Anki deck containing the following:
http://mcntyr.com/52-concepts-cognitive-toolkit/
https://medium.com/@yegg/mental-models-i-find-repeatedly-useful-936f1cc405d#.qtvgobrvk
The LW Wiki
Miscellaneous, domain-specific tools and knowledge
Our local downvoter doesn't seem to have noticed that downvoting no longer has any effect on what is read or commented on, as the karma restrictions on posting also seem to have been lifted.
I wrote a thing that turned out to be too long for a comment: The Doomsday Argument is even Worse than Thought
The argument a) depends on you being a random observer and b) makes only a statistical prediction. If you are one of those early or late observers you will come to the wrong conclusions. Probability doesn’t help you at that point at all.
Also: once you start creating more and more variants of the same pattern (double DA, other time frames), you don't really make the probability worse; you are doing p-hacking. That doesn't change reality, and you can't reliably learn anything about reality from it.
I might be in a simulation, and such checks might change my prior for it, but that prior is quite low anyway, like so many other strange and newfangled ways reality could be, theistic ones included.
Yes, it's a statistical prediction. The 90% confidence interval will be correct for 90% of people who use this method. 10% will be wrong. A priori you are 9 times more likely to be in the first group than in the second.
I don’t see this as an alternative variant to fudge the numbers. To me this seems to be the correct way to do the calculation. This makes the above argument correct, that 90% of people that use this argument will be correct.
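As a quick sanity check of that claim, here is a minimal Monte Carlo sketch (purely illustrative, with hypothetical population sizes): each simulated observer applies the one-sided rule "the total is at most 10 times my birth rank", and we count how often the rule gives a correct bound.

```python
import random

def doomsday_rule_accuracy(trials=100_000):
    """Fraction of random observers for whom 'total <= 10 * my rank' holds."""
    correct = 0
    for _ in range(trials):
        total = random.randint(1, 1_000_000)   # unknown true number of observers
        my_rank = random.randint(1, total)     # I am a uniformly random observer
        if total <= 10 * my_rank:              # the 90% doomsday bound
            correct += 1
    return correct / trials

print(doomsday_rule_accuracy())  # roughly 0.9, however 'total' is distributed
```

The accuracy stays near 90% regardless of how the true total is drawn, which is the sense in which the interval is "correct for 90% of people who use this method".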
Whereas the original version assumes you are a randomly chosen human, which is obviously incorrect, as most humans were not born at a time when this kind of statistical knowledge exists. Just the fact that you are asking about the doomsday argument shows there is something special about you, and puts you into a different reference class.
Why?
Because as I said, most humans would never even think of the doomsday argument. So the argument can’t apply to them. In order to get the mathematical guarantee that 90% of people who use the argument will be correct, you need to restrict your reference class only to people familiar with the argument.
More generally, the Copernican principle is that there is nothing particularly special about this exact moment in time. But we know there is something special. The modern world is very different from the ancient world. The probability of these ideas occurring to an ancient person is very different from the probability of them occurring to a modern person. And so any anthropic reasoning should adjust for that probability.
“The probability of these ideas occurring to an ancient person...”
In the ancient world it was very common to predict the imminent end of the world.
And in my own case, before ever having heard of the Doomsday argument, the argument occurred to me exactly in the context of thinking about the possible end of the world.
So it doesn’t seem particularly unlikely to occur to an ancient person.
How so?
See the Gospels for examples.
That’s what I thought you meant. But Christianity has existed for less than 4% of humanity’s time, and what we ordinarily call “the ancient world” started 3000-6000 years earlier.
On the other hand, fear of an end of the world (as they knew it) seems to be not unlikely at any time.
Creating reference classes as small as you like is easy. But the predictive power diminishes accordingly...
Double negatives exist to help hide what you’re saying. If it’s somewhat likely, show me a single clear example that predates Christianity. The story of Noah says such a flood will never happen again. The Kali Yuga was supposed to last more than 400000 years.
There are two possible ways to try to rebut the version of the DA that uses those who know about the DA as a reference class, but they still don't work (copying my comment from a similar thread about the origin of the Universe):
A. One possible counterargument here is the following. Imagine that any being has rank X, proportional to its complexity (or year of birth). But there will be infinitely many beings which are 10X complex, 100X complex, and so on. So any being with finite complexity is at the beginning of the complexity ladder. So any being may be surprised to find itself very early, and so there is no surprise in being surprised.
But we should still be in the middle of infinity, and our situation is not like that—it looks like we have just enough complexity to start to understand the problem, which is still surprising.
B. Another similar rebuttal: imagine all beings which are surprised by their position. The fact that we are in this set results only from the definition of the set, not from any properties of the whole Universe. Example: all people who were born on 1 January may be surprised that their birthday coincides with New Year, but it doesn't provide them with any information about the length of the year.
But my birthday is randomly positioned within the year (September), and in most testable cases mediocrity logic works as predicted.
"In most testable cases mediocrity logic works as predicted."
Exactly, but it doesn’t need testing anyway, because we know that it is mathematically necessary. That is why I think the DA is true, and the fact that everyone tries to refute it just shows wishful thinking.
(And by the way, my birthday is in late June and would work well to predict the length of the year. And my current age is almost exactly half of the current average lifespan of a human.)
I agree about wishful thinking.
OK, but if you were Omega, how could you try to escape from the DA curse?
One way is to reset one's own clock, or to reduce the number of people to 1, so that the DA will still hold but will not result in total extinction.
Another way is to hope that some other strange thing, like quantum immortality, will help "me" to survive.
Wouldn’t that mean surviving alone?
Looks like it would be alone ((( not nice.
But QI favours the worlds where I am more able to survive, so maybe I will have some kind of superpowers or will be uploaded. So I will probably be able to create several friends, but not many, as that would change the probability distribution in the DA and so make this outcome less probable.
Another option is to put myself in a situation where my life or death is univocally connected with the lives of a group of people (e.g. if we are all in one submarine). In that case we will all survive.
This interpretation of the DA puts you in the class of really intelligent observers, who are able to understand statistics, logic, etc. It helps us to solve the so-called reference class problem in the most natural way. It excludes animals, unborn children, and Neanderthals from the class of beings from which we are randomly chosen.
Unfortunately, it shortens the most probable time of existence of our class.
The problem with the doomsday argument is that it is a correct assignment of probabilities only if you have the very small amount of information specified in the argument. More information can change your predictions—the prediction you would make if you had less information gets overridden by the prediction that uses all your information.
Let’s use the example of picking a random tree. Suppose you know about the existence of tree-diseases that make trees sick and more likely to die, and you know that some trees are sick and some are healthy. You pick a random tree and it is ten years old and sick. You now should update your prediction of the average tree age toward 10 years, but you cannot expect that you have picked a point near the middle of this tree’s life. Because you know it is sick, you can expect it to die sooner than that.
Well I don’t have any statistics on how long civilizations last. It’s true that DA is a very naive way of estimating, but I think at this time all we can make are very naive estimates.
I think that when I add other information, like my belief in x-risk, the estimates get even worse. It really does feel like our civilization is at its peak and it's all downhill from here. Between how many dangerous technologies we are inventing and how many finite resources we are using up, the estimates given by the DA certainly feel plausible.
As I commented on your blog, I thought of the argument myself before I heard it from anyone, and I am unlikely to be unique in that, which makes things slightly less bad, since there were probably lots of people (in absolute numbers) who thought of it.
I also tried to raise the problem of DA-aware observers before but never met any understanding, and now we have three people who seem to be speaking about the same thing.
We could name them (us) double-DA-aware observers: the ones who know about the DA and also know that the DA is applicable only to observers who know about the DA.
If we apply all the same DA logic to this small group, we could get even worse results (and also go into eternal recursion). But that will not happen, as Carter in 1983 was part of the double-DA-aware observers, and that was 33 years ago. So even if the number of double-DA-aware observers is growing exponentially, we still may have 10-20 years before the end. (And this is in accordance with some strong AI timing expectations.)
If one strong AI replaces all humans, or solves the DA paradox, it will resolve the DA without total extinction.
I wrote about DA (for group of people who know about DA) here: http://lesswrong.com/lw/mrb/doomsday_argument_map/
It doesn't influence the timing of the possible catastrophe much. Most people who go deep into the topic have read (and published) about the DA. So we could use the number of articles about the DA to estimate the distribution of DA-aware observers. I suspect it is exponential. This means that the median rank of DA-aware observers is near the end of the timeline.
The best rebuttal I know here is that the question is not asked randomly but conditionally on my position. That is, when I ask "why am I so early?" I already know that I am "early", and I am not a random person from the whole group. So all who are early will be more interested in the DA and will ask the question more frequently. But does it completely compensate for the DA?
For example, people who were born on 1 January may be more interested in the question of why their birthday is so early in the year. But it doesn't prevent me from using my birthday to estimate the number of days in a year. (My birthday is on the 243rd day, so the number of days in a year is less than 500 with 50 per cent probability.)
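For what it's worth, the arithmetic behind that parenthetical uses the same mediocrity rule as the simulation above; the exact 50% bound from rank 243 is 486, which the comment presumably rounds up to 500.

```python
rank = 243        # day of the year my birthday falls on
print(2 * rank)   # 486: with 50% probability the year has no more days than this
print(20 * rank)  # 4860: the corresponding 95% upper bound
```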
I came to the same conclusion: that the DA should be applied only to the people who know about the DA. And that makes it worse. There are two ways to apply it to DA-aware people: using years, and using the number (rank) of people who know about the DA. The second way is even worse, as the number of such people is growing exponentially, and so we are near the end of the group of people who know about the DA.
It may not mean extinction; it could instead mean a strong and universally accepted DA rebuttal, or a drastic fall in the number of people.
A global catastrophe in the next 10-50 years unfortunately seems to be the most likely explanation.
(The ideas were also known to Carter in 1983, when he presented the anthropic principle and already knew about the DA. At that time he was the only person on Earth who knew about the DA, he understood its implication for the small class of people who know about the DA, and it really made him worry about extinction coming soon. I forget where I read about this.)
I would rather see the doomsday argument as a version of sleeping beauty.
Different people appear to have different opinions on this kind of argument. To me, the solution appears rather obvious (in retrospect):
If you ask a decision theory for advice on decisions, then there is nothing paradoxical at all, and the answer is just an obvious computation. This tells you that "probability" is the wrong concept in such situations; rather, you should ask about "expected utility" only, as this is much more stable under all kinds of anthropic arguments.
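To make the "ask for decisions, not probabilities" point concrete, here is a minimal sketch using the Sleeping Beauty setup (illustrative stakes only): the expected per-awakening payoff of a fixed betting policy is a well-defined computation, whichever "probability of heads" camp you belong to.

```python
import random

def expected_payoff_betting_heads(trials=100_000, stake=1.0):
    """Average payoff per awakening when Beauty always bets on heads."""
    total_payoff, awakenings = 0.0, 0
    for _ in range(trials):
        heads = random.random() < 0.5
        wakeups = 1 if heads else 2       # woken once on heads, twice on tails
        for _ in range(wakeups):
            awakenings += 1
            total_payoff += stake if heads else -stake
    return total_payoff / awakenings

print(expected_payoff_betting_heads())    # about -1/3 of the stake: per-awakening bets on heads lose
```

Whichever probability one prefers to assign, the expected payoff of a given betting policy comes out as an unambiguous number.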
Academic Publishing without Journals
By setting up the journals with a bitcoin type blockchain, you could reward reviewers, and citations. SciCred !
just a stub to think about
https://hack.ether.camp/#/idea/academic-publishing-without-journals
I’m going to understand ‘academic publishing without journals’ broadly.
Has anyone else found themselves implicitly predicting how much academic research would be performed and published virtually in the future? Think things like Sci-Hub, academic blogosphere, increase in number of preprints, etc. Can you ‘exit’ mainstream academia?
So, will the amount of academic research published in the blogosphere increase over time? It’s hard for me to imagine a non-interesting answer to this question. If it increased, wouldn’t that be interesting? And if it stayed about the same or decreased, wouldn’t you wonder why?
Er, could the blogosphere function as a complement or partial substitute for the traditional academic community? If not, why not?
The first intuitive objection that crosses my mind goes something like: “You cannot build the Large Hadron Collider in your backyard.” My immediate reply is, “Not yet.” But that seems fair. You probably can’t, at the moment, do experiments that currently require massive amounts of coordination and funding without going through the existing academic system.
But with that said, I think to ask, “What fraction of academic work is like that?” I’m sure it varies by field, but if a substantial amount of academic work can be done virtually, then you could still be pretty competitive from a keyboard, insofar as this is a competition. Think of all the theoretical work and lit reviews and data interpretation.
The second intuitive objection that crosses my mind goes something like: “Science is Big, blogosphere is small.”
This is often a good metaphor. Boulders beat bunnies in crushing contests. Walmart beats Mom and Pop in profit contests. Bigger is often better. But I think this is a very weird case of competition between firms. For one, academic papers aren’t really fungible. It doesn’t matter if you buy a Snickers bar from Mom and Pop or if you buy one from Walmart, and if it’s cheaper at Walmart, then you’ll buy it there. In the academic case, you could switch out one copy of my paper on high-pressure chocolate pudding dynamics with another, and everything would be fine, but you cannot in general exchange my paper on pudding dynamics with someone else’s paper on the neuropsychology of flower aesthetics and get the same work done. And beyond that, there is an ancient tradition of making scientific data public. You could conceivably obtain a monopoly on Snickers retailing, but if there were some effort to systematically starve any particular group of data, I imagine that being pretty controversial. I also don’t imagine that being very practical, at least not any more than any other historical effort to prevent people from copying digital information. It’s too easy to copy. The traditional publishers are sort of already trying this right now and miserably failing.
After I ask whether or not the idea is worth exploring, I think to ask why it hasn’t happened more, and whether or not it can be made to happen more.
Robin Hanson did an answering session on Quora recently, and he said that at times he’s had to consider whether he would publish something traditionally or blog about it. So, to think about it one way, why do academics currently prefer publishing in mainstream academia to publishing in the blogosphere?
Are the only reasons, ‘an overwhelming majority of academics publish in mainstream academia’, ‘publishing in mainstream academia is more prestigious than publishing in the blogosphere’, and ‘mainstream academia is the only way to get paid for doing research’? If those were the only reasons, this might mean that the relative unpopularity of the blogosphere for academic publication is just a matter of inertia. This seems like it would be a bad thing. And that’s an honest question: “Why do academics currently prefer publishing in mainstream academia to publishing in the blogosphere?” It’s easy for me to imagine that I’ve missed something because I’ve never experienced what it’s like to be an academic.
To think about it in another way, do academics have incentives to publish in the blogosphere as opposed to mainstream academia?
I think this is already answered weakly in the affirmative. Academic criticism is published in mainstream academia, but there are also bloggers who publish criticism in the blogosphere pseudonymously/anonymously instead in order to preserve their careers and reputations, and to avoid entry costs (which can include personal effort). This is sad. Also, it may be easy to lump this sort of work in with journalism, but the criticism can get quite technical, and it would have been published in mainstream academia otherwise.
The thing I found interesting about the link in the parent comment wasn’t the reputation system, but the idea of post-publication review becoming widespread. I would be so happy if scientists just blogged at each other instead of waiting for enough content to make a passable paper. I would be so happy if, instead of publishing another paper, scientists just wrote kickass comments and let the OP update what they had. There’s no reason you couldn’t assign status for that. That’s happened historically here. The Stack Exchange Network is also kind of like this. Post-publication review makes me excited. You can get status without maximizing a number (or, publication count; maybe there’s karma.)
Most generally: there are already people who prefer the Internet to academia; I wonder just how far and in what way you’d have to push things to make this preference ordering more common.
Is this true even if I am citing the article just to show how bad it is?
Also, this would preserve (maybe even increase) many existing problems with scientific publishing, such as splitting your ideas into many little articles, citing your friends who cite you in return, etc.
Has anyone else tried the new Soylent bars? Does anyone who has also tried MealSquares/Ensure/Joylent/etc. have an opinion on how they compare with other products?
My first impression is that they’re comparable to MealSquares in tastiness. Since they’re a bit smaller and more homogeneous than MealSquares (they don’t have sunflower seeds or bits of chocolate sticking out of them), it’s much easier to finish a whole one in one sitting, but more boring to make a large meal out of them.
Admittedly, eating MealSquares may have a bit more signalling value among rationalists, and MealSquares cost around a dollar less per 2000 kcal than the Soylent bars do. I’ll probably stick with the Soylent bars, though; they’re vegan, and I care about animals enough for that to be the deciding factor for me.
I have some in transit to me (in Australia); I will report in the OT when they get here.
I've tried Joylent Twennybars but not any other product. I found them very tasty, but I also got some stomachaches. Maybe I would get used to that, though, or I may have been eating them incorrectly.
I have tried Joylent, but not any other stuff, so I can’t compare. (I suspect they all taste like muesli.)
I like having a vegan option, but I also prefer the liquid version, so I don’t care about the taste or price of the bars.
However, for introducing other people to the concept of universal food, bars are probably better than the liquid stuff. As in, people are more likely to take a food that seems less weird.
Can anyone give a steelman version of Chomsky's anti-statistics "colorless green ideas sleep furiously" argument? The more I think about it, the more absurd it seems.
Here’s my take on Chomsky’s argument:
The phrase “colorless green ideas sleep furiously” is extremely improbable from a statistical perspective.
However, it is also entirely consistent with the rules of grammar.
Therefore, one cannot use statistical reasoning to draw conclusions about the rules of grammar.
Naively, this seems plausible enough. But consider the following mirror-image argument, about physics:
Consider the event “a sword fell out of the sky” (not the sentence, the physical event).
This event is extremely improbable from a statistical perspective.
However, it is entirely consistent with the laws of physics; if a sword were dropped out of a hot air balloon, it would obviously fall to the ground.
Therefore, one cannot use statistical reasoning to draw conclusions about the laws of physics.
The mirror image argument seems patently absurd, but it follows the exact same line of reasoning.
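To make premises 1 and 2 concrete, here is a toy sketch (a made-up mini-corpus and a deliberately tiny grammar, not a real linguistic model): an unsmoothed bigram model assigns the famous sentence probability zero, while a trivial phrase-structure check still accepts it.

```python
from collections import Counter

# Hypothetical mini-corpus; it never contains the bigram "colorless green".
corpus = ("the green ideas were popular . dogs sleep furiously . "
          "colorless liquids are common .").split()
bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus)

def bigram_probability(sentence):
    """Unsmoothed maximum-likelihood bigram probability of a sentence."""
    words = sentence.split()
    prob = 1.0
    for w1, w2 in zip(words, words[1:]):
        if unigram_counts[w1] == 0 or bigram_counts[(w1, w2)] == 0:
            return 0.0                      # unseen bigram -> zero probability
        prob *= bigram_counts[(w1, w2)] / unigram_counts[w1]
    return prob

# A deliberately tiny grammar: S -> Adj* Noun Verb Adv
LEXICON = {
    "colorless": "Adj", "green": "Adj",
    "ideas": "Noun", "dogs": "Noun",
    "sleep": "Verb",
    "furiously": "Adv",
}

def grammatical(sentence):
    """True iff the sentence matches the pattern Adj* Noun Verb Adv."""
    tags = [LEXICON.get(w) for w in sentence.split()]
    if None in tags:
        return False
    i = 0
    while i < len(tags) and tags[i] == "Adj":
        i += 1
    return tags[i:] == ["Noun", "Verb", "Adv"]

s = "colorless green ideas sleep furiously"
print(bigram_probability(s))   # 0.0  -- statistically "impossible"
print(grammatical(s))          # True -- grammatically well-formed
```

The statistical model and the grammar give opposite verdicts on the same string, which is the contrast the argument turns on. (A smoothed model would give a tiny nonzero number instead of an exact zero, but the contrast is the same.)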
The problem is that “the laws of physics” is a phrase that means two different things:
(1) Rules like Newton’s laws.
(2) What the world does.
Rules of grammar are formal rules like (1) and not about (2). You can use statistics to see whether (1) matches (2), but there's a lot that can be said with math about formal rules. You can use math to show that two kinds of theories are equivalent or that they are different.
There’s a lot about the rules of mathematics that you can’t learn via statistics. You can’t prove NP=/=P by looking at a bunch of examples and using statistics.
Chomsky invented the Chomsky hierarchy, and from what I remember from my classes at university there's no statistics involved in that way of thinking about grammar. It's still a model of grammar important enough to be taught in computer science classes.
The passage seems silly. It is easy to make statistical models that contradict Chomsky’s claim. But I think he means something else, that whether a sentence is grammatical, while not a binary, admits sharply discrete levels. The concept of grammar cuts human understanding of language at its joint and statistical understanding is largely on the other side. At least, that is the claim; I think introspection is difficult and usually turns statistical understanding into an illusion of discreteness.
Is this, actually, the argument Chomsky made? Because looking at Wikipedia, it says
which is a very different thing.
This looks like a coding language that can take distributions, output coordinates or arguments, and then perform algorithmic transforms on them. I think. Way over my haircut. Looks like an efficient way to work with sets, though.
“These source-to-source transformations perform exact inference as well as generate probabilistic programs that compute expectations, densities, and MCMC samples. The resulting inference procedures run in time comparable to that of handwritten procedures. ”
https://arxiv.org/abs/1603.01882
If they actually achieved what they claim—a calculus for directly obtaining inference algorithms from input models—then it is very cool.
My gut mathematical feeling is that the full solution to that problem is intractable, but even a partial semi-solution could still be cool.
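For anyone wanting a feel for what "computing an expectation from a probabilistic program" means at all, here is a toy, hand-rolled sketch (it has nothing to do with the paper's actual Hakaru calculus or its source-to-source transformations): exact inference over a tiny discrete model by brute-force enumeration.

```python
from itertools import product

def expectation(model, f):
    """E[f(outcome)] for a model given as a list of (name, {value: prob}) pairs."""
    names = [name for name, _ in model]
    total = 0.0
    for assignment in product(*[dist.items() for _, dist in model]):
        outcome = {name: value for name, (value, _) in zip(names, assignment)}
        prob = 1.0
        for _, p in assignment:
            prob *= p
        total += prob * f(outcome)
    return total

# Example: two independent biased coins; expected number of heads.
model = [("a", {0: 0.3, 1: 0.7}), ("b", {0: 0.5, 1: 0.5})]
print(expectation(model, lambda o: o["a"] + o["b"]))   # 1.2
```

The paper's contribution is doing this kind of thing symbolically and efficiently; the brute-force version above is only meant to show what quantity is being computed.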
Troll research from Reddit
"Researchers from Stanford and Microsoft have created a scientific methodology to measure the frequency and output of intransigent commenters in social networks – those whose contributions are written as statements of fact, rather than contributions to a discourse."
PDF: Identifying Dogmatism in Social Media: Signals and Models
http://128.84.21.199/pdf/1609.00425v1.pdf
Exercise
In previous attempts at exercising, I’ve never lost weight; never gained in strength or dexterity; never even gotten a second wind. I’ve never met any significant exercise goals. But in the long-term, exercise is still worthwhile. So I’m trying something new: Changing my self-conception to Someone Who Exercises Daily. No expectations of any gains, rewards, or second winds. Just someone who slogs through the painful routine each day, every day.
I’ve picked a routine that can be done anywhere, with no equipment: Burpees, in descending sets (ie, for 15, do 5, 4, 3, 2, 1; for non-triangular numbers, add at the start, ie for 17, do 7, 6, 3, 2, 1), adding 1 per day. (Supposedly, burpees work all the major parts of the body, etc, etc.) If-and-when I make it to 30-descending, I’ll consider changing it up.
Today: Did 5 burpees.
Also today: Set up https://twitter.com/DPR_exercise to semi-publicly keep track. (Or, as an RSS feed, http://twitrss.me/twitter_user_to_rss/?user=dpr_exercise .)
How do you think your first failure at this will come about? Before retrying?
Hm… I expect that I’m going to mess up my scheduling, or some family thing will crop up unexpectedly, and I won’t wake up in time to run through the exercise routine before having to go out on some errand, on a day with so much going on that it’s well past sunset before I have another chance to do some jumping up and down.
Sounds similar to my experience. You might want to set up a review process once a week (Sunday night is usually convenient) so that you can check if you are still on track with the goal.
Minor failures or small setbacks will happen, finding a way to ensure they don’t become drawn out failures will likely keep you on track.
Oddly enough, part of what set my thoughts in this particular direction was watching the Gene Wilder movie, “The Frisco Kid”, about an observant Jew travelling across the Wild West; which made me start thinking about people who perform regular religious observances. For many people, failing to perform a regular ritual on one day doesn’t mean they give it up entirely—they may do something corrective about the lapse, but simply resume the regular ritual on its next scheduled time. I’m hoping I can leverage a secular version of that mindset for my own purposes, at such time as it becomes necessary.
I suspect this is one of those critical things that separate losers from winners. Statistically, sooner or later something unexpected is going to mess up your schedule. People who decide in advance “if I fail once, it means I have failed forever, so there is no point in trying anymore” are just giving themselves an excuse to stop.
It makes sense to worry about failing today or tomorrow, but it doesn’t make sense to worry about having failed yesterday. If you failed to perform the ritual yesterday, maybe do some penance, and maybe reflect and improve your planning, but don’t use it as a cheap excuse for not doing the ritual today. (Not even in the way “I am not doing the ritual anymore until I improve my plans”. Nope; do the ritual at the predetermined time, and use some other time to reflect plan.)
I did burpees for a while. Now I’m not sure what’s the point. Sure, you get tired quickly, but you don’t feel strong or fast while doing it. Lifting average-heavy stuff for 10-15 reps, or running 100m dashes with short breaks, is much more fun for me because I can go all out and push against my limit of power, not just my fatigue.
I am, close to literally, starting from scratch, exercise-wise. I’ve started a thread in the bodyweight subreddit about a better exercise regimen, and am entirely literally in a mall right now looking for some exercise bands to let me do more types of movements in the area I have to exercise in. You could think of the burpees as a placeholder while I work out something better.
Consider couch to 5k. It’s a good basic place to start.
Expect at least 2 months before you are feeling fit. You get to feel progress in the sense of "could run a bit further today" with each new run. The two most important things:
You will get hurt. You will injure yourself. If you think you won't, you definitely will, and you will have to take rest because of it. It might set you back days or weeks. But it's better to rest.
You actually make gains to muscle and strength on your days off, when the muscles repair and grow back. Because of this, most of the pages on the fitness subreddits will have a 3-4 days a week routine with rest days in between. Rest days are important.
FYI, my current plan is 3 days a week of bodyweight exercises (working up to /r/bodyweight’s recommended routine), and 3 days a week of jogging starting with a pre-C25k program. How well that plan succeeds, well, we’ll just have to see. :)
I appreciate the suggestion, and have added a few bookmarks on C25k to my to-read pile.
Given my experience today, attempting to add some of the warm-up routines from /r/bodyweight’s Recommended Routine, I’m afraid that it seems that I have a bit of a ways to go before I’m even at the level of ‘couch’. Ah well; I knew what I was getting into when I started this, and am picking up as much theory as I can to adapt pre-existing routines to my circumstances.
That rounds up to 19, not 17.
So it does. Obviously, for 17, I should have written 6, 5, 3, 2, 1. (And for 19, it would be 6, 5, 4, 3, 1.)
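For what it's worth, here is a small sketch of one way to read the descending-set scheme (my interpretation of the corrected numbers, not an official formula): take the largest triangular number not exceeding the target, then add one rep to each of the first few sets to absorb the remainder.

```python
def descending_sets(total):
    """Split `total` reps into descending sets, e.g. 17 -> [6, 5, 3, 2, 1]."""
    k = 1
    while k * (k + 1) // 2 <= total:   # find the largest k with 1+2+...+k <= total
        k += 1
    k -= 1
    sets = list(range(k, 0, -1))       # k, k-1, ..., 1
    remainder = total - k * (k + 1) // 2
    for i in range(remainder):         # spread the leftover reps over the first sets
        sets[i] += 1
    return sets

print(descending_sets(15))  # [5, 4, 3, 2, 1]
print(descending_sets(17))  # [6, 5, 3, 2, 1]
print(descending_sets(19))  # [6, 5, 4, 3, 1]
```

This reproduces the corrected breakdowns for 17 and 19 above.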
Has anyone here had success with the method of loci (memory palace)? I’ve seen it mentioned a few times on LW but I’m not sure where to start, or whether it’s worth investing time into.
I considered it seriously and spent hours working on a method, then realised that if I reduced stress I could improve my memory enough that I didn't need it.
Using memory palaces to solve your problems might be a harder path to doing whatever it is that you want to do than other existing paths that might seem aversive. (I was hoping to remember names better. Now I just choose to remember them with more ferocity)
Brienne has, example blog post here. She’d probably recommend it.
I personally am satisfied with some much more simplistic memory techniques, like trying to remember context when I remember something (e.g. trying to remember the sight and feel of sitting in a certain classroom to remember the content of a lecture), and using repetition more judiciously (remembering to use people's names right after I hear them is the biggest use, but this is also good for shopping lists, etc.).
I also suspect that practice using any sort of deliberate memorization at all will improve some sort of general deliberate memorization skill, so you might find that practicing mnemonics or method of loci improves your memory in a general way.
“Your Memory: How it works and how to improve it” by Higbee is an excellent book on memory. It dispels some common memory myths, clarifies concepts (e.g. short vs long term memory), teaches general principles on how to remember information (meaningfulness, organisation, association, visualization, etc.), as well as specific memory techniques (method of loci, peg mnemonic, first letter mnemonic, etc.).
Spreadsheeting ems
With some help, and a lot of simplifying assumptions, I’ve put together the spreadsheet at https://docs.google.com/spreadsheets/d/1CZ565cTZh0upkiE3aeHqOVxybI38Fwn6zSpPT2YVGGY/edit#gid=0 , mainly to help me work out some background structure for some scifi stories.
Can you think of any ways to improve the sheet?
CRISPR Screen in Toxoplasma Identifies Essential Apicomplexan Genes.
http://www.ncbi.nlm.nih.gov/pubmed/27594426?dopt=Abstract#
This is great: they screened and found the invasion factor in the invasive organelles that allow Toxo and malaria to slither inside cells and infect.
“Secondary screens identify as an invasion factor the claudin-like apicomplexan microneme protein (CLAMP), which resembles mammalian tight-junction proteins and localizes to secretory organelles, making it critical to the initiation of infection. CLAMP is present throughout sequenced apicomplexan genomes and is essential during the asexual stages of the malaria parasite Plasmodium falciparum. These results provide broad-based functional information on T. gondii genes and will facilitate future approaches to expand the horizon of antiparasitic interventions.”
Virus-Like Nanoparticle Vaccine Confers Protection against Toxoplasma gondii.
http://www.ncbi.nlm.nih.gov/pubmed/27548677?dopt=Abstract#
Upon challenge infection with a lethal dose of T. gondii (ME49), all vaccinated mice survived, whereas all naïve control mice died. Vaccinated mice showed significantly reduced cyst load and cyst size in the brain.
SETI at X-ray Energies—Parasitic Searches from Astrophysical Observations
Was published in 1997, but uploaded to arXiv to make it more accessible.
“If a sufficiently advanced civilization can either modulate the emission from an X-ray binary, or make use of the natural high luminosity to power an artificial transmitter, these can serve as good beacons for interstellar communication without involving excessive energy costs to the broadcasting civilization. ”
http://arxiv.org/abs/1609.00330
Haven't heard anything lately on this, so maybe it never got funded? But I still postulate that the universal communication standard should be spectral lines, not binary bits.
Why was https://en.wikipedia.org/wiki/Bachir_Boumaaza rejected from attending the http://lesswrong.com/lw/n33/european_community_weekend_2016/ because he couldn't stay the 3 days?
[I'm one of the organizers of this year's Community Weekend.]
Since the community weekend’s focus is on building community, we decided to only accept speakers who are also regular participants. We offered this option to Bachir but didn’t receive any further messages.
A quick and dirty inspiration: that value alignment is very hard is shown by very intelligent people going neoreactionary or converting to a religion from atheism. They usually base their 'move' on a coherent epistemology, with just some tiny components that zag instead of zigging.
Comment at will, I’ll expand with more thoughts.
Both neoreactionaries and other people want a functional society, they just disagree on how to get it; both transhumanists and religious people want to live forever, they just disagree about whether life extension or praying to get into the best afterlife is the best way to go about it. Perhaps they have the same terminal goals?
OTOH if religious people have ‘faith’ as a terminal value and atheists do not, then yes, this may be more of a problem. If people have ‘other people follow my values’ as a terminal value, this could be a very large problem.
I think that one of the problems with CEV is that we still have to say whose values we want to extrapolate, and that almost defines the outcome.
For example, the CEV of the values of C. elegans is not equal to human values.
The main problem here is how we define what it is to be human. This is where most value groups differ. Will we include unborn children? Neanderthals?
I mean that different value systems have different definitions of human beings, and "human CEV" is different for a green party member who includes all animals in "humans" and for a neo-something who may exclude some people from the definition of humans. So the AI could correctly calculate CEV, but for the wrong group.
I agree with a twist: I think CEV will mostly be uninteresting, because it will have to integrate so many conflicting points of view that it will mostly come up with “do nothing at all”.
Brainstorming a bit, I would say that value alignment is impossible unless an AI becomes actively part of the moral landscape: instead of being a slave to a hypothetical human uber-value, it will need to interact heavily with humans and force them to act so as to reveal their true preferences, or collaborate to generate a utopia.
I would also add that the diversity of human values is a value in itself and an important part of human culture. While it often results in conflicts and suffering, a homogeneous New World may be even more unpleasant. CEV indirectly implies that there is one value system for all.
It is also based on the orthogonality thesis, which may be true for some AIs but is not true for human brains. There are no separate values in the human brain. "Value" is a way to describe human behaviour, but we can't point to value neurons or value texts in the human brain. Our emotions, thoughts and memories are interconnected with our motivation and reward system, as well as with all our cultural background. So we can't upload values without uploading a human being.
The orthogonality thesis will be false for AIs for the same reasons you rightly say it is false for humans.
We have the desire for certain things. How do we know we have those desires? Because we notice that when we feel a certain way (which we end up calling desires) we tend to do certain things. So we call those feelings the desire to do those things. If we tended to do other things, but felt the same way, we would call those feelings desires for the other things, instead of the first things.
In the same way, AIs will tend to do certain things, and many of those tendencies will be completely independent of some arbitrary goal. For example, let’s suppose there is a supposed paperclipper. It is a physical object, and its parts will have tendencies. Consequently it will have tendencies as a whole, and many of them will be unrelated to paperclips, depending on many factors such as what it is made of and how the parts are put together. When it has certain feelings (presumably) and ends up tending to do certain things, e.g. suppose it tends to think a long time about certain questions, it will say that those feelings are the desire for those things. So it will believe that it has a desire to think a long time about certain topics, even if that is unrelated to paperclips.
In the case of AIs, some orthogonality is possible if the goal system is kept in a separate text block, but only to some extent.
If it is sophisticated enough it will ask: "What is a paperclip? Why is it in my value box?" And such an ability for reflection (which is needed for a self-improving AI) will blur the distinction between an intelligence and its values.
Orthogonality is also in question if the context changes. If the meanings of words and the world model change, values need to be updated. Context is inside the AI, not in its value box.
The separate text-block can illustrate what I am saying. You have an AI, made of two parts, A & B. Part B contains the value box which says, “paperclips are the only important thing.” But there is also part A, which is a physical thing, and since it is a physical thing, it will have certain tendencies. Since the paperclippiness is only in part B, those tendencies will be independent of paperclips. When it feels those tendencies, it will feel desires that have nothing to do with paperclips.
Maybe, but they could still operate in harmony to reduce the world to giant paperclips.
“They could still operate in harmony...” Those tendencies were there before anyone ever thought of paperclips, so there isn’t much chance that all of them would work out just in the way that would happen to promote paperclips.
Are we still talking about an AI that can be programmed at will?
I am pointing out that you cannot have an AI without parts that you did not program. An AI is not an algorithm. It is a physical object.
Of course, everything is a physical object. What I’m curious about your position is if you think that you can put any algorithm inside a piece of hardware, or not.
I’m afraid that your position on the matter is so out there for me that without a toy model I wouldn’t be able to understand what you mean. The recursive nature of the comments doesn’t help, also.
You can put any program you want into a physical object. But since it is a physical object, it will do other things in addition to executing the algorithm.
Well, now you've got me curious. What other things is a processor doing when executing a program?
I gave the example of following gravity, and in general it is following all of the laws of physics, e.g. by resisting the pressure of other things in contact with it, and so on. Of course, the laws of physics are also responsible for it executing the program. But that doesn’t mean the laws of physics do nothing at all except execute the program—evidently they do plenty of other things as well. And you are not in control of those things and cannot program them. So they will not all work out to promote paperclips, and the thing will always feel desires that have nothing to do with paperclips.
I don't think that "tendencies" is the right wording here. A calculator, for example, has a keyboard and a processor. The keyboard provides digits for multiplication, but the processor doesn't have any tendencies of its own.
But it could still define the context.
The processor has tendencies. It is subject to the law of gravity and many other physical tendencies. That is why I mentioned the fact that the parts of an AI are physical. They are bodies, and have many bodily tendencies, no matter what algorithms are programmed into them.
This is akin to saying that since your kidneys work by extracting ammonia from your blood, you have some amount of desire to drink ammonia.
No. It is akin to saying that if you felt the work of your kidneys, you would call that a desire to extract ammonia from your blood. And you would.
But doesn't that just reduce to a will to survive? I know that extracting certain salts from my blood is essential to my survival, so I want the parts of me that do exactly that to continue doing so. But I do not have any specific attachment to that function just because a sub-part of me executes it. If I were in a simulation, say, even if I knew that my simulated kidneys worked in the same way, I know I could continue to exist even without that function.
From the wording of your previous comments, it seemed that an AI conscious of its parts should have isomorphic desires, but the problem is that there could be many different isomorphisms, some of which are ridiculous.
We do in fact feel many desires like that, e.g. the desire to remove certain unneeded materials from our bodies, and other such things. The reason you don’t have a specific desire to extract ammonia is that you don’t have a specific feeling for that; if you did have a specific feeling, it would be a desire specifically to extract ammonia, just like you specifically desire the actions I mentioned in the first part of this comment, and just as you can have the specific desire for sex.
I feel we are talking past each other, because reading the comment above I’m totally confused about what question you’re answering...
Let me rephrase my question: if I substitute one of the parts of an AI with an inequivalent part, say a kidney with a solar cell, will its desires change or not?
Let me respond with another question: if I substitute one of the parts of a human being an inequivalent part, say the nutritional system so that it lives on rocks instead of food, will the human’s desires change or not?
Yes, they will, because they will desire to eat rocks instead of what they were eating before.
The same with the AI.
Yes, and expecting this, anyone will try to narrow the group with which the CEV-AI will align. For example, suppose we have a CEV-AI with the following function: presented with a group X, it will calculate CEV(X). So if I want to manipulate it, I will present it with as small a group as possible, consisting mostly of people like me.
But it is also possible to imagine a superCEV, which will be able to calculate not only the CEV, but also the best group for the CEV.
For me CEV is not pleasant because it kills me as a subject of history. I am not the one who rules the universe, but just a pet for whom someone else knows what is good and bad. It kills all my potential for future development, which I see as the natural evolution of my own values in a complex environment.
I also think that there are infinitely many possible CEVs for a given group, which means that any CEV is mostly random.
The CEV-AI will have to create many simulations to calculate the CEV. So most observers will still find themselves in a CEV-modelling simulation where they fight another group of people with different values and see their own values evolving somehow. If the values of all people do not converge, the simulation will be terminated. I don't see that as a pleasant scenario either.
My opinion is that the most natural solution to the alignment problem is to create the AI as an extension of its creator (using uploading and Tool AIs, or running the AI-master algorithm on a biological brain). In this case any evolution of the goal system will be my own evolution, so there will be no problem of aligning something with something else. There will always be one evolving thing. (Whether it will be safe for others, or stable, is another question.)
This is a very weak argument, since it might simply show that coherent epistemology leads to everyone becoming religious, or neo-reactionary.
In other words, your arguments just says “very intelligent people disagree with me, so that must be because of perverse values.”
It could also just be that you are wrong.
Uhm, it's evident I've not made my argument very clear (which is not a surprise, since I wrote that stub in less than a minute).
Let me rephrase it in a way that addresses your point:
“Value alignment is very hard, because two people, both very intelligent and with a mostly coherent epistemology, can radically differ in their values because of a very tiny difference.
Think for example of neoreactionaries or rationalists converted to religions: they have a mostly coherent set of beliefs, often diverging from atheist or progressive rationalists in very few points."
I am saying that is a difference in belief, not in values, or not necessarily in values.
They want to steer the future in a different direction than what I want, so by definition they have different values (they might be instrumental values, but those are important too).
Ok, but in this sense every human being has different values by definition, and always will.
http://lesswrong.com/r/discussion/lw/nwu/reality_is_arational/
At -8, yet still no one has been able to refute my arguments.
It’s really silly.
If we lived in the kind of universe where learning didn’t help, where drawing more-correct conclusions and fitting your behavior better to the environment didn’t help, then evolution and indeed biological life wouldn’t work either. The kind of world where maps don’t have anything to do with territories is a dead world, one in which there are no maps because becoming a mapper is worthless.
“Every communication is inaccurate” is inaccurate, but more-or-less true. “Every communication is equally inaccurate” is very much less accurate, to the point of being a flat lie.
After all, if communication didn’t work (better than non-communication), then there wouldn’t be any. The existence of falsehoods implies the existence of (relatively accurate) truths, because if there wasn’t such a thing as a truth, then why would we bother making up lies? A lie only fools anyone because they believe it to be a truth.
It's a question of truth: you can be truthful that these things are maps, and yet nothing really changes, except you are meta-aware and no longer ignorant. It takes a radical shift in open-mindedness to see this, and I don't mean it at all in the way you might think. Because you, and many others, are so attached to your maps, you are stuck in a lie, in a maximum-security prison forever, but only because you have beliefs and maps.
I do agree with you on these points, it's obvious, right? I know, but just because it's obvious doesn't mean it is the territory, because it's not, it's not at all. They are concepts we have, and because of our ego we can't really let go of them.
You can discover the world, which is not a map, and because you’ll see how obvious this is, there won’t be anything more perfect, not even 1000 x sunsets or 1000 x “aha”-moments can compare.
It depends on the perspective: from the arational they are all the same, and there's nothing wrong with that. But now I am writing as a map, from the arational, not the arational itself; it exists beyond reasoning or understanding. Do you understand how different things matter in relation to what?
Communication can work just fine, but it's still communication. You might believe that some communication is better than others, and it might be the case. But it's still what it is in relation to the arational. I don't see any issue in some extreme rationalist having high expected value, making excellent decisions, yet still being aware that in relation to the arational it is all the same.
Sure, this is obvious, but everything is still the way it is. We're conditioned to say "THIS IS ME", "I'M THIS", "I'M SPECIAL"; it screams "ego", "identity", and so forth. The universe IS, all of reality IS, and we're IT.
Under what circumstances would you say “someone was able to refute my arguments”? Evidently when you are convinced by them. So the “fact” that no one can refute your arguments is not something impressive, but merely shows that you are stubborn.
It's because I don't think the arguments made can be refuted, because of the inherent nature of the subject. It's like denying subjective experience although that's all you really have. (From an odd perspective.)
I am convinced, but I wonder how to properly argue for it, as no one wants to continue to argue. Maybe I want to teach others, but I am not sure if I am lying to myself.