Open thread, Oct. 19 - Oct. 25, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Luke quotes from Superforecasting on his site:
“Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources—from the New York Times to obscure blogs—that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. Thanks to Doug’s simple invention, he is sure to constantly encounter different perspectives.”
He mentions wishing he could get his hands on this program.
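For fun, here's a minimal sketch of how such a diversity-emphasizing selector could work. Everything here is invented for illustration (the source names, the tags, the scoring rule); the quote only says Doug tags sources by ideology, subject, and region and selects for diversity, so this is one guess at the mechanism:

```python
from collections import Counter

# A hypothetical reading list; every name and tag below is invented.
# Each source is tagged by ideological orientation, subject, and region,
# as in the database the quote describes.
SOURCES = [
    {"name": "Big City Paper", "ideology": "left", "subject": "politics", "region": "US"},
    {"name": "Market Weekly", "ideology": "right", "subject": "economics", "region": "US"},
    {"name": "Obscure Blog A", "ideology": "libertarian", "subject": "tech", "region": "EU"},
    {"name": "Obscure Blog B", "ideology": "left", "subject": "tech", "region": "Asia"},
]

def pick_next(history):
    """Pick the source whose tags have been seen least often so far."""
    seen = Counter(tag for s in history
                   for tag in (s["ideology"], s["subject"], s["region"]))
    def novelty(source):
        # Fewer previously-seen tags => higher novelty score.
        return -sum(seen[t] for t in (source["ideology"], source["subject"], source["region"]))
    return max(SOURCES, key=novelty)

history = []
for _ in range(3):
    nxt = pick_next(history)
    history.append(nxt)
    print(nxt["name"])  # Big City Paper, Obscure Blog A, Market Weekly
```

Notice how the selector bounces between ideologies and regions rather than repeating the same cluster, which is the whole point of the exercise.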
Does anyone know of something similar, or who this ‘Doug’ may be? I wonder if it may be as simple as asking the man himself. The book gives ‘Doug Lorch’ as his full name. Google gives a Facebook account as the first result, but I have no idea whether it’s an actual match.
The Facebook account links to a blog: http://newsandold.blogspot.de/ The blog indicates that he’s politically knowledgeable, and the Facebook account says he worked at IBM, which fits the superforecaster’s description as a retired computer programmer.
I think he’s your man ;) The Facebook account has only 12 friends, so it doesn’t seem to be very active, but it’s worth a try to contact him.
Did anything come from this? Would love to see that, too!
Someone created an /r/controlproblem subreddit.
Actually very high quality subreddit. I’m impressed.
I never realized how many people there are who say “it’s a good thing if AI obliterates humanity, it deserves to live more than we do”.
On some level, the question really comes down to what kind of successors we want to create; they aren’t going to be us, either way.
That depends on whether you plan to die.
If I didn’t, the person I become ten thousand years from now isn’t going to be me; I will be at most a distant memory from a time long past.
It will still be more “me” than paperclips.
Than paperclips, yes. Than a paperclip optimizer?
Well… ten thousand years is a very, very long time.
It’s a perfectly reasonable position when you consider that humanity is not going to survive long-term anyway. We’re either going extinct and leaving nothing behind, evolving into something completely new and alien, or getting destroyed by our intelligent creations. The first possibility is undesirable. The second and third are indistinguishable from the point of view of the present (if you assume that AI will be developed far enough into the future that no current humans will suffer any pain or sudden death because of it).
You might still want your children to live rather than die.
The questions asked there mostly seem basic and answered by some sequence or another. Maybe someone should make a post pointing out the most relevant sequences so those people can be thinking about the unsolved problems on the frontier?
Great idea. I commission you for the task! (You might also succeed in collecting effective critiques of the sequences.)
If you post an article there, it is subtitled “self.ControlProblem”. Seems like many people there have a problem with self control. :D
If any moderator is reading this: user denature123 has posted large quantities of ugly spammy comments; if the user and the comments could be blown away, that would be nice.
The scientists encouraging online piracy with a secret codeword
...
Amusingly, right now the hashtag seems to be dominated by people talking about the article/phenomenon, not by people trying to get pdfs.
I’m shocked, shocked...
Think you have a finely calibrated and important information diet? Imagine if you had the world’s strongest intelligence agency tailor the news for you. Well, you don’t have to imagine, because the president’s daily briefs have just been declassified. If you’re interested, you can collaborate with researchers to get a better handle on it. Enjoy.
For convenience, here’s a link to the individual briefs as separate PDF files, for anyone else who doesn’t want to download all 34MB at once. (I thought the Flickr page might have a few convenient, face-on snapshots of pages from the briefs, but the CIA reckoned it was more important to take 5 photos of a woman wheeling a trolley of briefs through the CIA lobby. #thanksguys)
I suspect daily presidential briefings from the CIA are finely (as in carefully & deliberately) calibrated but not that well calibrated (as in being accurate, representative and not tendentious). The CIA doubtless has incentives to misrepresent some things to the president — and indeed a president probably has some incentives to allow/encourage being misled about certain things!
It’s an interesting data set, but I don’t think it’s useful as a primary source. Given that the freshest “news” in the pile is from 1977, I don’t think the term “news” is appropriate. If you are interested in what happened 40 years ago, it might be better to read more recently written history books than contemporary intelligence analysis.
What makes a good primary care physician and how do I go about finding one?
Off the top of my head, the most reliable way would be to ask another senior medical professional—senior as they would tend to have been in the same geographic area for a while and know their colleagues, plus have more direct contact with primary care physicians. Also, rather than asking “who should I see as my primary care physician?”, you could ask “who would you send your family to see?”. This might help prevent them from just recommending a friend/someone with whom they have a financial relationship. I note that this would be relatively hard to do unless you already know a senior medical professional.
Another option would be to ask a medical student (if you happen to know any in your area) which primary care physicians teach at their university and whom they would recommend. Through my medical training I have found teaching at a medical school to be weak-to-moderate evidence of being above average. Asking a medical student would help add a filter for avoiding some of the less competent ones, strengthening this evidence.
I think lay-people’s opinions correlate much more strongly with how approachable and nice their doctor is, as opposed to competence. Doctor rating sites could be used just to select for pleasant ones, if you care about that aspect.
(Caveats: opinion-based; my experience is limited to the country I trained in; I am junior in experience.)
This is a great question, and I’m glad that you asked, since I am interested in hearing what people think about this as well. I suppose that word of mouth is generally superior to, say, just searching for a primary care doctor through your insurance provider’s website, but I don’t have any more specific ideas than that.
Personally, I can, and often have, put off going to the doctor due to akrasia, so I put a bit of extra weight on how nice the doctor is—having a nice doctor lowers the willpower-activation-energy needed for me to make an appointment. I also think that willingness to spend time with patients is important, but I’d be more likely to think this than the average person—I’m pretty shy, so I’ll often tell my doctors that I don’t have any more questions (when I actually do) if they seem like they’re in a hurry, so as to not bother them.
Ask everyone you know; ask for their recommendations, and ask why they make those recommendations. Most of the answers you get will not be worth much, but look for the good answers; you only need one.
The trick here is that while it is nearly impossible to find the perfect doctor through any method, you are only looking for a good doctor. Any reasonable recommendation followed by a quick Google search (Google allows reviews on doctors, and most established doctors in larger cities will have at least one or two) to weed out the bad apples will do. This is one of those situations where the perfect is the enemy of productivity.
On what basis do you believe that publicly posted reviews of doctors correlate with the quality of a doctor’s medical ability?
I don’t assume much of a reliable correlation; but it doesn’t require much. Once you have found a likely few doctors, it is worth finding out if a lot of people hate one of them—particularly if they explain why. It’s basically a very cheap way to filter out potential problems. If I felt that there was a strong correlation, I would have recommended starting with the Google reviews—after all, Googling is much more time-efficient than talking to people.
For context, of the few doctors I sampled on Google review, I found none of them to have anything significant posted in their review. The worst I saw was “receptionist was very rude!”
Given two or more okay choices of doctors given by friends and acquaintances, I think that it is fair to apply this sort of filter, even if you have weak evidence that it is effective. The worst that will happen is that you make the other good choice, rather than the good choice you would have made. The best that might happen is that you avoid an unpleasant experience (well, the best is that you lower your chances of dying through physician error). This calculation may change if you have only one doctor under consideration.
If a doctor tells you the straight truth about what you have to change in your life, that can be unpleasant, and I think it can lead to bad reviews. On the other hand, I don’t know whether it’s useful to avoid those doctors; defensive medicine doesn’t seem to be something to strive for.
Yes, but if you are reading the reviews, you will be able to determine if they are useful to you. Many will not be. You should certainly be applying the same critical thinking skills that you used when hearing recommendations from your friends in the first place.
I am assuming that there are useful negative comments, although I haven’t seen any yet. (My interpretation was that this was because I was only looking at good doctors to start with). If you have a useful comment on any doctor you have seen, please do add it -- it could save someone some trouble.
First of all, competence and skill.
Just like everyone else, doctors vary in how good they are. Unfortunately, there is a popular meme (actively promulgated by the doctors’ guild) that all doctors are sufficiently competent that any will do. That’s… not true.
Given this, it shouldn’t be surprising that finding out a particular doctor’s competency ex ante is hard to impossible (unless s/he screwed up so hard that s/he ran into trouble with the law or the medical board). Typically you’ll have to rely on proxies (e.g. the reputation of the medical school s/he went to).
Beyond that, things start to depend on what you need a doctor for. If you have a condition to be treated, you probably want a specialist in that (even primary care physicians have specializations). If you want to run a lot of tests on yourself, you want a doctor who’s amenable to ordering whatever tests you ask for. Etc., etc.
I don’t have any surefire methods that don’t require a very basic working knowledge of medicine, but a general rule of thumb is the physician’s opinion of the algorithmic approach to medical decision making. If it is clearly negative, I’d be willing to bet that the physician is bad. Not quite the same as finding a good one, but decent for narrowing your search.
Along with this, look for someone who thinks in terms of possibilities rather than certainties in diagnoses.
All assuming you’re looking for a general practitioner, of course. I wouldn’t select surgeons based on this rule of thumb, for instance.
If you’re looking for someone who simply has a good bedside manner, then reviews and word of mouth do work.
Any particular evidence in favor of this approach, anecdotal or otherwise?
Late reply, I know!
Standardizing decisions through checklists and decision trees has, in general, shown to be useful if the principles behind those algorithms are based on a reliable map. In medical practice, that’s probably the evidence-based medicine approach to screening, diagnosis, and treatment.
In addition, all this assumes that patient management skills are not a concern, since it’s not something I personally consider important (from the point of view of a patient) when considering a provider of any medical or technical service. If you typically require more from your physician (and many people do see physicians as societal pillars and someone to talk to their non-medical problems about) than medical evaluation and treatment, then it is something to keep in mind.
Anecdotally, every medical provider I’ve encountered who was a vocal opponent of clinical decision support systems had a tendency to jump to dramatic conclusions that were later proven wrong.
This is one of the few studies on the subject that isn’t behind a paywall.
Given the absence of a boasting thread recently, here’s a little boasting:
Helped monitor and act as a proxy for these publicly accessible Reddit accounts on behalf of non-internet users (meaning multiple people can use and post with them through me):
https://www.reddit.com/user/Magnusoz
https://www.reddit.com/user/fruitheart
Identified a potent research interest
http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/a2tk
Monitored and learned from responses to my LW content as appropriate:
http://lesswrong.com/user/Clarity
Just a quick dump of what I’ve been thinking recently:
A train of thought is a sequence of thoughts about a particular topic that lasts for some time, which may produce results in the form of decisions and updated beliefs.
My work, as a technical co-founder of a software company, essentially consists of riding the right trains of thought and documenting decisions that arise during the ride.
Akrasia, in my case, means that I’m riding the wrong train of thought.
Distraction means some outside stimulus that compels my mind to hop to a train of thought different from the one it is currently riding or should be riding. The stimuli can be anything: people talking to me, a news story, a sexually attractive person across the street, an advertisement, etc.
Some train rides are long: they last for hours, days or even weeks, while some are short and last for seconds or minutes. Historically, I’ve done my best work on very long rides.
Different trains of thought have different ‘ticket costs’. Hopping to a sex-related or a politics-related train of thought is extremely cheap. Caching a big chunk of a problem into my mind requires conscious effort, and thus the ticket is more expensive. In my case, the right trains of thought are usually expensive.
Interruptions set back the distance traveled, or, in some cases, completely reset the distance to the original departure station. Or they may switch me to a different train of thought completely, while, at the same time, depleting the resource (willpower?) that I need for boarding the correct train of thought.
My not-so-recent decision to stop reading peoplenews has greatly reduced the number and severity of unwanted / involuntary train hops.
My “superfocus periods”, during which I’m able to ride a single right train of thought for multiple days or weeks, are mostly due to the absence of stimuli that compel my mind to jump to different, cheaper trains of thought. These periods happen when I’m away from work and sometimes from my family, which means I can safely drop my everyday duties such as showing up at the office, doing errands, replying to emails, meeting people, etc.
Keeping a detailed work diary is tremendously helpful for re-boarding the right train of thought after severe interruptions / “cache wipes”. I use Workflowy.
I’ve noticed that I’m reluctant to board long rides when I expect interruptions during the ride. Recent examples include reluctance to read Bostrom’s Superintelligence at home, or to ‘load’ a large piece of a project into my head at work, because my office is full of programmers who ask (completely legitimate) questions about their current tasks.
It’s often entertained on LessWrong that if we live in some sort of a big world, then conscious observers will necessarily be immortal in a subjective sense. The most familiar form of this idea is quantum immortality in the context of MWI, but arguably a similar sort of what I would call ‘big world immortality’ is also implied if, for example, we live in another sort of multiverse or in a simulation.
It seems to me that big world scenarios are well accepted here, but that a lot of people don’t take big world immortality very seriously. This confuses me, and I wonder if I’m missing something. I suppose that there are good counterarguments that I haven’t come across or that haven’t actually been presented yet because people haven’t spent that much time thinking about stuff like this. The ones I have read are from Max Tegmark, who’s stated that he doesn’t believe quantum immortality to be true because death is a gradual, not a binary process, and (in Our Mathematical Universe) because he doesn’t expect the necessary infinities to actually occur in nature. I’m not sure how credible I find these.
So, should we take big world immortality seriously? I’d appreciate any input, as this has been bothering me quite a bit as of late and has had a rather detrimental effect on my life. Note that I’m not exactly thrilled about this; to me, this kind of involuntary immortality, which nevertheless doesn’t guarantee from an observer’s point of view that anyone else will survive, sounds pretty horrible. David Lewis presented a very pessimistic scenario in ‘How Many Lives Has Schrödinger’s Cat?’ as well.
Whether or not we take it seriously doesn’t seem to have any effect on how we should behave as far as I can tell, so what would taking it seriously imply?
I mostly wanted to hear opinions on whether to believe it or not. But anyway, I’m not so sure that you’re correct. I think we should find out whether big world immortality should affect our decisions or not. If it is true, then I believe we should, for instance, worry quite a bit about the measure of comfortable survival scenarios versus uncomfortable ones. This might have implications regarding, for example, whether or not to sign up for cryonics (I’m not interested in general, but if it significantly increases the likelihood that big world immortality leads to something comfortable, I might), or worrying about existential risk (from a purely selfish point of view, existential risk is much more threatening if I’m guaranteed to survive no matter what, but from my point of view no one else is, than in the case where it’s just as likely to wipe me out as anyone else).
If you’re going to worry about things like that if big world immortality is true, you can just worry about them anyway, because the only thing that you will ever observe (even if big world immortality is false) is that you always continue to survive, even when other people die, even from things like nuclear war.
Your observations will always be compatible with your personal immortality, no matter what the truth is.
Well, sort of, but I still think there is an important difference in that without big world immortality all the survival scenarios may be so unlikely that they aren’t worthy of serious consideration, whereas with it one is guaranteed to experience survival, and the likelihood of experiencing certain types of survival becomes important.
Let’s suppose you’re in a situation where you can sacrifice yourself to save someone you care about, and there’s a very, very big chance that if you do so, you die, but a very, very small chance that you end up alive but crippled, but the crippled scenarios form the vast majority of the scenarios in which you survive. Wouldn’t your choice depend at least to some degree on whether you expect to experience survival no matter what, or not?
Some tangential food for thought: My grandfather died recently after a slow and gradual eight-year decline in health. He suffered from a kind of neurodegenerative disorder with symptoms including various clots and plaques in his brain that gradually increased in size and number while the functioning proportion of his brain tissue decreased.
During the first year he had simple forgetfulness. In the second year it progressed to wandering and excessive eating. It then slowly progressed to incontinence, lack of ability to speak, and soon, lack of ability to move. During his final three years he was entirely bedridden and rarely made any voluntary motor movements even when he was fully awake. His muscle mass had decreased to virtually nothing. During his last month he could not even perform the necessary motor movements to eat food and had to go on life support. When he finally did die, many in the family said it didn’t make any difference because he was already dead. I was amazed that he held out as long as he did; surely his heart should have given out a long time ago.
Was I a witness to his gradual dissolution in a sequence of ever-increasingly-unlikely universes? Maybe in some other thread he had a quick and painless death. Maybe in an even less likely thread, he continued declining in health to an even less likely state of bodily function.
Well, that’s just sad. But I suppose you should believe that you witnessed a relatively normal course of decline. In more unlikely threads there possibly were quick and painless deaths, continuing declining, and also miraculous recoveries.
I guess the interesting question your example raises, in this context, is this: is there a way to draw a line from your grandfather in a mentally declined state to a state of having miraculously recovered, or is there a fuzzy border somewhere that can only be crossed once?
It seems to me that a disease that inflicts gross damage to substantial volumes of brain pretty much destroys the relevant information, in which case there probably isn’t much more line from “mentally declined grandfather” to “miraculously restored grandfather” than from “mentally declined grandfather” to “grandfather miraculously restored to someone else’s state of normal mental functioning” (complete with wrong memories, different personality, etc.).
I consciously will myself to believe in big world immortality, as a response to existential crises, although I don’t seem to have actual reasons not to believe such besides intuitions about consciousness/the self that I’ve seen debated enough to distrust.
So did I understand correctly, believing in big world immortality doesn’t cause you an existential crisis, but not believing in it does?
Yes—I mean existential crisis in the sense of dread and terror from letting my mind dwell on my eventual death; convincing myself I’m immortal is a decisive solution to that, insofar as I can actually convince myself. I don’t mind existence being meaningless (it is that either way); I care much more about whether it ends.
So you’re not worried that it might be unending but very uncomfortable?
I think it should be taken seriously, in the sense that there is a significant chance that it is true. I agree that Less Wrong in general tends to be excessively skeptical of the possibility, probably due to an excessive skepticism-of-weird-things in general, and possibly due to an implicit association with religion.
However:
1) It may just be false because the big world scenarios may fail to be true.
2) It may be false because the big world scenarios fail to be true in the way required; for example, I don’t think anyone really knows which possibilities are actually implied by the MWI interpretation of quantum mechanics.
3) It may be false because “consciousness just doesn’t work that way.” While you can argue that this isn’t possible or meaningful, it is an argument, not an empirical observation, and you may be wrong.
4) If it’s true, it is probably true in an uncontrollable way, so that basically you are going to have no say in what happens to you after other observers see you die, and no say in whether it is good or bad (and an argument can be made that it would probably be bad). This makes the question of whether it is true or not much less relevant to our current lives, since our actions cannot affect it.
5) There might be a principle of caution at work (being used by Less Wrong people): one is inclined to exaggerate the probability of very bad things in order to be sure to avoid them. So if final death is very bad, people will be inclined to exaggerate the probability that ordinary death is final.
Of all the things LW has been accused of, this is the first time I see a skepticism-of-weird-things in general being attributed to the site.
While a valid point, LW does have a shut-up-and-just-believe-the-experts wing.
Regarding one, two and three: shouldn’t we, in any case, be able to make an educated guess? Am I wrong in assuming that based on our current scientific knowledge, it is more likely true than not? (My current feeling is that based on my own understanding, this is what I should believe, but that the idea is so outrageous that there ought to be a flaw somewhere.)
Two is an interesting point, though; I find it a bit baffling that there seems to be no consensus about how infinities actually work in the context of multiverses (“infinite”, “practically infinite” and “very big” are routinely used interchangeably, at least in text that is not rigorously scientific).
Regarding four, I’m not so sure. Take cryonics for example. I suppose it does either increase or decrease the likelihood that a person ends up in an uncomfortable world. Which way is it, and how big is the effect? Of course, it’s possible that in the really long run (say, trillions of times the lifespan of the universe) it doesn’t matter.
Regarding five, I guess so. Then again, one might argue that big world immortality would itself be a ‘very bad thing’.
UN climate reports are increasingly unreadable
I’ve been thinking about some of the issues with CEV. It’s come up a few times that humanity might not have a coherent, non-contradictory set of values. And the question of how to come up with some set of values that best represents everyone.
It occurs to me that this might be a problem mathematicians have already solved, or at least given a lot of thought. In the form of voting systems. Voting is a very similar problem. You have a bunch of people you want to represent fairly, and you need to select a leader that best represents their interests.
My favorite alternative voting system is the Condorcet method. Basically, it compares each pair of candidates in a head-to-head election and selects the candidate that would have won every such pairwise election.
It is possible for there to be no Condorcet winner, if the population has circular preferences: Candidate A > Candidate B > C > A… like a rock-paper-scissors thing.
To resolve this, a number of methods have been developed to select the best compromise. My favorite is minimax: it selects the candidate whose greatest pairwise loss is the least bad. I think that’s the most desirable way to pick a winner, and it’s also super simple.
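Here’s a toy sketch of both ideas, assuming ranked ballots (the candidates and ballots are made-up examples; margins are just “ballots preferring x” minus “ballots preferring y”):

```python
def pairwise_margin(ballots, x, y):
    """Ballots preferring x over y, minus ballots preferring y over x."""
    return sum(1 if b.index(x) < b.index(y) else -1 for b in ballots)

def condorcet_winner(ballots, candidates):
    """The candidate who beats every other head-to-head, if one exists."""
    for c in candidates:
        if all(pairwise_margin(ballots, c, o) > 0 for o in candidates if o != c):
            return c
    return None  # circular preferences: no Condorcet winner

def minimax_winner(ballots, candidates):
    """Fallback: the candidate whose worst pairwise defeat is smallest."""
    return min(candidates,
               key=lambda c: max(pairwise_margin(ballots, o, c)
                                 for o in candidates if o != c))

cands = ["A", "B", "C"]
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"],
           ["A", "C", "B"], ["A", "B", "C"]]
print(condorcet_winner(ballots, cands))   # A beats B and C head-to-head

cycle = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
print(condorcet_winner(cycle, cands))     # None: rock-paper-scissors cycle
print(minimax_winner(cycle, cands))       # minimax still picks a compromise
```

In the cyclic example every candidate loses one matchup, so minimax falls back to the one whose worst loss is smallest (here they’re tied, so it’s a tie-break).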
There are some differences. Instead of a leader, we want the best set of values and policies for the AI to follow. And there might not be a finite set of candidates, but an infinite number of possibilities. And actually voting might be impractical. Instead an AI might have to predict what you would have voted, if you knew all the arguments and had much time to think about it and come to a conclusion. But I think it can still be modeled as a voting problem.
Now this isn’t actually something we need to figure out now. If we somehow had an FAI, we could probably just ask it to come up with the most fair way of representing everyone’s values. We probably don’t need to hardcode these details.
The bigger issue is why the person or group building the FAI would even bother to do this. They could just take their own CEV and ignore everyone else’s. And they have every incentive to do this. It might even be significantly simpler than trying to do a full CEV of humanity. So even if we do solve FAI, humanity is probably still screwed.
EDIT: After giving it some more thought, I’m not sure voting systems are actually desirable. The whole point of voting is that people can’t be trusted to just specify their utility functions. The perfect voting system would be for each person to give a number to each candidate based on how much utility they’d get from them being elected. But that’s extremely susceptible to tactical voting.
However with FAI, it’s possible we could come up with some way of keeping people honest, or peering into their brains and getting their true value function. That adds a great deal of complexity though. And it requires trusting the AI to do a complex, arbitrary, and subjective task. Which means you must have already solved FAI.
If I were God of the World, I would model the problem as more of a River Crossing Puzzle. How do you get things moving along when everyone on the boat wants to kill each other? Segregation! Resettling humanity mapped over a giant Venn diagram is trivial once we are all uploaded, but it also runs into ethical problems; just as voting and enacting the will of the majority (or some version thereof) is problematic, so is setting up the world so that the oppressor and the oppressed will never be allowed to meet. However, in my experience people are much happier with rules like “you can’t go there” and much less happy with rules like “you have to do what that guy wants”. This is probably due to our longstanding tradition of private property.
This makes some assumptions as to what the next world will look like, but I think that it is a likely outcome—it is always much easier to send the kids to their rooms than to hold a family court, and I think a cost/benefit analysis would almost surely show that it is not worth trying to sort out all human problems as one big happy group.
Of course, this assumes that we don’t do something crazy like include democracy and unity of the human race as terminal values.
This puts me in mind of Eliezer’s “Failed Utopia #4-2”.
Not quite.
The local population consists of 80% blue people and 20% orange people. For some reason, the blue people dislike orange people. A blue leader arises who says “We must kill all the orange people and take their stuff!” Well, it’s an issue, and how do people properly decide on a policy? By voting, of course. Everyone votes and the policy passes by simple majority. And so the blue people kill all the orange people and take their stuff. The end.
This is exactly the type of problem that mathematicians have tried to solve with different voting schemes. One recent example with the potential to solve this problem is quadratic vote buying, which takes into account the strong preferences of minorities.
I am not sure this is a mathematical problem. Generally speaking, giving a minority the veto power trades off minority safety against government ability to do things. In the limit you have decision making by consensus which has obvious problems.
What do you buy votes with? Money? Then it’s an easy way for the blue people to first take orange people’s stuff and then, once the orange people run out of resources to buy votes with, to kill them anyway.
That’s precisely why it is a mathematical problem… you need to quantify the tradeoffs, and figure out which voting schemes maximize different value schemes and utility functions. Math can’t SOLVE this problem because it’s an ought problem, not an is problem.
But you can’t answer the ought side of things without first knowing the is side.
In terms of quadratic vote buying, money is only one way to do it; another is to have an artificial or digital currency used only for vote buying, of which people get a fixed amount for the year.
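To make the mechanics concrete, here’s a toy sketch (every number here is invented) of quadratic vote buying with a fixed yearly credit allotment. The core rule is that casting n votes on one issue costs n² credits, so intensity of preference gets expensive fast:

```python
import math

# Hypothetical setup: everyone gets the same yearly allotment of
# voting credits, and casting n votes on one issue costs n^2 credits.
BUDGET = 100

def votes_bought(credits):
    """Largest n with n^2 <= credits."""
    return math.isqrt(credits)

# 80 "blue" voters mildly favor a measure (4 credits each);
# 20 "orange" voters strongly oppose it (their whole budget each).
spending = [+4] * 80 + [-BUDGET] * 20

tally = sum(math.copysign(votes_bought(abs(c)), c) for c in spending)
print(tally)  # -40.0: the intense minority outvotes the mild majority
```

The mild majority buys 2 votes each (4 credits), the intense minority buys 10 each (100 credits), so the measure fails 160 to 200. That’s the sense in which the scheme “takes into account strong preferences of minorities.”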
I don’t think your concept of it really makes sense in the context of modern government with a police force, international oversight, etc. All voting schemes break down when you assume a base state of anarchy—but assuming there’s already a rule of law in place, you can maximize how effective those laws are (or the politicians who make them) by changing your voting rules.
Ahem.
I would be quite interested to learn who exerts “international oversight” over, say, USA.
Besides, are you really saying a “modern” government can do no wrong??
I’m sorry, I’m not talking about the executive function of the government which merely implements the laws, I’m talking about the legislative function which actually makes the laws. There is no assumption of the base state of anarchy.
This isn’t helpful. There’s nothing for me to respond to.
The UN (specifically, other very powerful countries that trade with the US).
Would a historical example of what you’re talking about be the legality of slavery?
Let me unroll my ahem.
You claimed this is a mathematical problem, but in the next breath said that math can’t solve it. Then what was the point of claiming it to be a math problem in the first place? Just because dealing with it involves numbers? That does not make it a math problem.
LOL. Can we please stick a bit closer to the real world?
Actually, the first example that comes to mind is when the US decided that all Americans who happen to be of Japanese descent and have the misfortune to live on the West Coast need to be rounded up and sent to concentration, err.. internment camps.
Problems can have a mathematical aspect without being completely solvable by math.
I’m not sure that’s a fair problem to ascribe to voting. If >50% of that populace wants to kill the orange folks, it’s going to happen, however they select their leaders. It isn’t voting’s fault that this example is filled with maniacs.
But maybe that’s the correct outcome? If 80% of the population truly believes that some people should die, maybe they should. What higher authority can we appeal to?
I’m not saying I think minorities should die. But I also don’t think the majority thinks that either. So it’s just an absurd hypothetical. You could say the same thing about CEV in general. “We shouldn’t take the utility function of humanity, because what if it’s bad?” Bad according to what? What higher utility function are we using to determine badness? Some individual’s?
I think Condorcet voting is the best way to compromise between a lot of different people’s values. It tends to favor moderates and compromises. Especially the Minimax method I mentioned.
I don’t think this system is great, I just think it’s the best we can possibly do.
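As an illustration of how Minimax favors broadly acceptable candidates, here is a toy sketch (my own code; the candidate names are made up). Each candidate is scored by their worst pairwise defeat, and the candidate whose worst defeat is smallest wins.

```python
def minimax_winner(ballots, candidates):
    """Minimax Condorcet: each ballot is a full ranking (best first).
    The winner is the candidate whose worst pairwise defeat margin
    is smallest (zero for a Condorcet winner)."""
    # pairwise[a][b] = number of ballots ranking a above b
    pairwise = {a: {b: 0 for b in candidates if b != a} for a in candidates}
    for ranking in ballots:
        for i, a in enumerate(ranking):
            for b in ranking[i + 1:]:
                pairwise[a][b] += 1

    def worst_defeat(c):
        # Largest margin by which any opponent beats c head-to-head.
        return max(max(0, pairwise[o][c] - pairwise[c][o])
                   for o in candidates if o != c)

    return min(candidates, key=worst_defeat)

# Two polarized factions plus a moderate "m" whom everyone ranks second:
ballots = ([("a", "m", "b")] * 4
           + [("b", "m", "a")] * 3
           + [("m", "a", "b")] * 2)
print(minimax_winner(ballots, ["a", "b", "m"]))  # prints m
```

Plurality on first preferences would elect "a" here; Minimax elects the moderate "m", who beats both rivals head-to-head, which is the compromise-favoring behavior described above.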
You’ll have to convince me that taking other people’s utility function into account is consistent with my utility function.
It’s not. I literally discussed that in my first comment. If you can become dictator, it’s definitely in your interest to do so. Instead of turning power over to a democracy.
But I would much rather live under a democracy than a dictatorship where I’m not dictator.
Really?
I wonder if you heard the word “genocide” before. Not in the context of hypotheticals, but as a recurring feature of human history.
That’s not an argument.
If 80% of the population has a certain value, how can you say that value is wrong? Statistically you are far more likely to be in that 80%.
And the alternative isn’t “you get to be dictator and have all your values maximized without compromise”. It’s “some random individual is picked from the population and gets his values maximized over everyone else’s.” Democracy of values is far preferable.
By functioning democracies? With a perfectly rational and informed population?
That’s the important part of CEV, or at least my interpretation of it. The AI predicts what you would decide, if you knew all the relevant information and had plenty of time to think about it. I’m not suggesting a regular democracy where the voters barely know anything.
Indeed, it is not. The question mark at the end might indicate that it is a question.
I don’t see any problems with this whatsoever. I am not obligated to convert to the values of the majority. What is the issue that you see?
There is a bit of a true Scotsman odor to this question :-) but let me point out my example upthread and ask you whether the Nazi party came to power democratically.
At this level you might as well cut to the chase and go straight to “I wish for you to do what I should wish for”. No need to try to tell God… err.. AI how to do it.
And they aren’t obligated to convert to your values. Not everyone can have their way! Democratic voting is the fairest way of making a decision when people can’t agree.
Yes I know it’s No-True-Scotsman-y, but I really believe that a totally informed population would make very different decisions than an angry mob during a war and depression.
And even your examples are not convincing. Internment during wartime wasn’t anywhere near the level of genocide. And the Nazi election was far from fair.
Well I did mention that in my first comment. This is more of an aesthetic thing to talk about. Once we have an AI we can just ask it how to solve this problem.
But I still think it’s somewhat important to think about. Because if we go with your solution, we just get whatever the creator of the AI wants. He becomes supreme dictator of the universe forever, and forces his values on everyone for eternity. I would much rather have CEV or something like it.
That sounds like an article of faith.
“Fair” is a very… relative word. Calling something “fair” rarely means more than “I like / approve of it”.
Ah. Well, speaking aesthetically, I find the elevation of mob rule to be the ultimate moral principle ugly and repugnant. Y’know, de gustibus ’n’all...
I don’t believe I proposed any.
Well see my edit to my first comment. I’ll paste it here:
Do you agree that the fairest system would be to combine everyone’s utility functions and maximize them? Of course somehow giving everyone equal weight to avoid utility monsters and other issues. I think these issues can be worked out.
If so, do you agree that voting systems are the best compromise when you can’t just read people’s utility functions? And need to worry about tactical voting? Because that is basically what I was getting at.
If you don’t agree to the above, then I don’t understand your objection. CEV is about somehow finding the best compromise of all humans’ utility functions. About combining them all. All I’m talking about is more concrete methods of doing that.
Anything you can do maximizes some combination of people’s utility functions. So it is trivially true that the fairest system is a system which uses some combination of people’s utility functions. Unless you can first describe how you are going to avoid utility monsters and other perils of utilitarianism, you really haven’t said anything useful.
No, I do not. I do not think that humans have coherent utility functions. I don’t think utilities of different people can be meaningfully combined, too.
Ah, yes, the famous business plan of the underpants gnomes...
No, I do not. They might be best given some definitions of “best” and given some conditionals, but they are not always best regardless of anything.
What makes you think it is possible?
Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?
Thanks to Turing completeness, there might be many possible worlds whose basic physics are much simpler than ours, but that can still support evolution and complex computations. Why aren’t we in such a world? Some possible answers:
1) Luck
2) Our world has simple physics, but we haven’t figured it out
3) Anthropic probabilities aren’t weighted by simplicity
4) Evolution requires complex physics
5) Conscious observers require complex physics
Anything else? Any guesses which one is right?
Other answers I’ve considered:
o) Simpler universes are more likely, but complicated universes vastly outnumber simple ones. It’s rare to be at the mode, even though the mode is the most common place to be.
p) Beings in simple universes don’t ask this question because their universe is simple. We are asking this question, therefore we are not in a simple universe.
2′) You don’t spend time pondering questions you can quickly answer. If you discover yourself thinking about a philosophy problem, you should expect to be on the stupider end of entities capable of thinking about that problem.
n) The world is optimized for good theatre, not simplicity.
My guess is #2.
I’m of the opinion that there isn’t going to be a satisfactory answer. It’s true that the complexity of our universe makes it more likely that there’s some special explanation, but sometimes things just happen. Why am I the me on October 21, and not the me on some other day? Well, it’s a hard job, but someone’s got to do it.
That’s #1. It would be good to know exactly how lucky we got, though.
How do #1 and #3 differ? I think both are “yes, there are many such worlds—we happen to be in this one”.
It doesn’t sound impossible that anthropic probabilities are weighted by simplicity and we’re lucky.
Hmm. I think “we’re lucky” implies “probabilities are irrelevant for actual results”, so it obsoletes #3.
I think “we’re lucky” vs “simplicity is irrelevant” affects how much undiscovered complexity in physics we should expect.
What is the future of electronics technicians? Are they a good career choice? Will their skills quickly become obsolete due to coming hardware changes?
I think the general consensus is robot automation. Seems like a task that can be done by a robot...
This blog post gives a good summary of why much of the criticism of phlogiston as a bad hypothesis is not justified.
Do people take advantage of instant run-off voting to “not throw away their vote”?
What do they do in Australia? Where else do people have such systems? I suppose I could just look up Australia, but I fear it might be hard to interpret and I’d rather hear from someone with experience of it.
I ask because the recent British Labour leadership election was very different from the last. I suspect that there was a substantial portion of the electorate who preferred, say, Abbot in 2010, but didn’t vote for her because she was not viable. The whole complicated system exists to allow people to simply express their preferences and not put in the strategic voting effort of determining who is viable, but maybe it isn’t doing much.
(It is definitely doing something. In 2010, 28% of the vote share went to non-viable candidates. A plurality system applied to those first round votes would have chosen David over Ed.)
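For concreteness, the counting rule in question can be sketched in a few lines (my own toy code; the candidate names below are illustrative, loosely echoing the 2010 example rather than reproducing its actual ballots).

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff voting: repeatedly eliminate the candidate with the
    fewest first-preference votes; their ballots transfer to each voter's
    next surviving choice, until someone holds a majority."""
    ballots = [list(b) for b in ballots]
    while True:
        counts = Counter(b[0] for b in ballots if b)
        leader, top = counts.most_common(1)[0]
        if top * 2 > sum(counts.values()):
            return leader  # majority of remaining ballots
        loser = min(counts, key=counts.get)
        # Eliminating `loser` transfers those ballots to the next preference.
        ballots = [[c for c in b if c != loser] for b in ballots]

ballots = ([("Abbott", "Ed", "David")] * 2
           + [("Ed", "David", "Abbott")] * 4
           + [("David", "Ed", "Abbott")] * 5)
print(irv_winner(ballots))  # prints Ed
```

In this hypothetical, plurality on first preferences would pick David (5 of 11); under IRV, Abbott's supporters can rank her first without throwing their vote away, and their transfers decide the race for Ed.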
As an Australian I can say I’m constantly baffled over the shoddy systems used in other countries. People seem to throw around Arrow’s impossibility theorem to justify hanging on to whatever terrible system they have, but there’s a big difference between obvious strategic voting problems that affect everyone, and a system where problems occur in only fairly extreme circumstances. The only real reason I can see why the USA system persists is that both major parties benefit from it and the system is so good at preventing third parties from having a say that even as a whole they can’t generate the will to fix it.
In more direct answer to your question, personally I vote for the parties in exactly the order I prefer them. My vote is usually partitioned as: [Parties I actually like | Major party I prefer | Parties I’m neutral about | Parties I’ve literally never heard of | Major party I don’t prefer | Parties I actively dislike]
A lot of people vote for their preferred party, as evidenced by more primary votes for minor parties. Just doing a quick comparison, in the last (2012) US presidential election only 1.74% of the vote went to minor candidates, while in the last Australian federal election (2013) an entire 21% of the votes went to minor parties.
Overall it works very well in the lower house.
In the upper house, the whole system is so complicated no-one understands it, and the ballot papers are so big that the effort required to vote in detail prevents most people from bothering. In the upper house I usually just vote for a single party and let their preference distribution be automatically applied for me. Of course I generally check what that is first, though you have to remember to do it beforehand since it’s not available while you’re voting. Despite all that, though, it’s a good system; I wouldn’t want it replaced with anything different.
This, I hear.
That’s the result of compulsory voting not of preference voting.
More spam: someone called “lucy” is posting identical nonsense about vampires to multiple threads. Less obnoxious than denature123 yesterday, but still certainly spam.
(I’ve been assuming that, given that we have multiple moderators, it’s better to post comments like this than to PM one or more individual moderators. I will be glad of correction from actual moderators if some other approach is better.)
I must admit, had lucy managed to only post the vampire ads in threads about interventions to increase longevity / social skills / etc., I might have considered them worth keeping around for entertainment value. At least then we could use them as an excuse to discuss how blood transfusions from healthy donors affect various quality-of-life factors.
(I wonder how long before someone tries to start a business based around selling healthy blood / fecal transplants / etc, and how long before the FDA tells them to stop before they sell someone diseases.)
Yo, mods! A mop and a bucket are needed at the forums to clean up after some script!
‘Zeno effect’ verified: Atoms won’t move while you watch
...
...
There is significant progress in genetic modification of humans and in physical modification/augmentation of humans. It is plausible we will have genetically modified and/or physically modified human intelligence before we have artificial intelligence.
FAI is the pursuit of artificial intelligence constrained in a way that it will not be a threat to unmodified humans. Or at least that is what it seems to be to me as an observer of discussions here, is this a reasonable description of FAI?
It occurs to me that natural human intelligence has certainly not developed with any such constraints. Indeed, if humanity can develop UAI, then that is essentially proof that human intelligence is not Friendly in the sense we wish FAI to be.
Presumably we have been more worried about how to constrain AI to be friendly because AI could learn to self-modify and experience exponential growth and thus overwhelm human intelligence. But what of modified human intelligence, genetic or physical? These ARE examples of self-modification. And they both appear to be capable of inducing exponential growth.
Is the threat from unfriendly human intelligence any less or any different, or worthy of consideration as an existential risk? If an intelligence arises from modified human, is it a threat to unmodified human, or an enhancement on it? How do we define natural and artificial when our purpose in defining it is to protect the one from the other?
Human intelligence has already chosen to maximize the burning of oil with no regard for the viability of our biosphere, so we’re already living under an Unfriendly Human Intelligence scenario.
Bostrom discusses this possibility in Superintelligence, both in the form of enhanced biological cognition and in brain/machine interfaces. Ultimately he argues that a super intelligent singleton is more likely to be a machine than an enhanced biological brain. He argues that increases in cognitive ability should be much faster with a machine intelligence than through biological enhancement, and that machine intelligence is more scalable (I believe that he makes the point that, while a human brain the size of a warehouse is not practical, a computer the size of a warehouse is).
Well, of course it’s not. Nobody ever said it is.
Biologically, on the wetware substrate? I don’t think that’s possible. And if you mean uploads/ems, the distinction between human and AI becomes somewhat vague at this point...
Currently, I’d say the threat from unfriendly natural intelligence is many orders of magnitude higher than that from AI.
There is a valid question of the shape of the improvement curve, and it’s at least somewhat believable that technological intelligence outstrips puny humans very rapidly at some point, and shortly thereafter the balance shifts by more than is imaginable.
Personally, I’m with you—we should be looking for ways to engineer friendliness into humans as the first step toward understanding and engineering it into machines.
No. That’s a really bad idea.
First, no one even knows what “friendliness” is. Second, I strongly suspect that attempts to genetically engineer “friendly humans” will end up creating genetic slaves.
Perhaps. Don’t both of those concerns apply to AI as well?
Humans are the bigger threat, are more easily studied, and are (currently) changing slowly enough that we can be more deliberate than we can of a near-foom AI (presuming post-foom is too late).
I don’t have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
Sure I do. I’m a speciesist :-)
Besides, we’re not discussing what to do or not to do with hypothetical future conscious AIs. We’re discussing whether “we should be looking for ways to engineer friendliness into humans”. Humans are not hypothetical and “ways to engineer into humans” are not hypothetical either. They are usually known by the name of “eugenics” and have a… mixed history. Do you have reasons to believe that future attempts to “engineer humans” will be much better?
For the most part, eugenics does not have a mixed history. Eugenics has a bad name because it has historically been performed by eliminating people from the gene pool—through murder or sterilization. As far as I am aware, no significant eugenics movement has avoided this, and therefore the history would not qualify as mixed.
We should assume that future attempts will be better when those future attempts involve well developed, well understood, well tested, and widely (preferably universally) available changes to humans before they are born—that is, changes that do not take anyone out of the gene pool.
I probably am too, but I don’t much like it. I want to be a consciousness-ist.
Most humans are hypothetical, just like all AIs are. They haven’t existed yet, and may not exist in the forms we imagine them. Much like MIRI is not recommending termination of any existing AIs, I am not recommending termination of existing humans.
I am merely pointing out that most of what I’ve read about FAI goals seems to apply to future humans as much or more as to future AIs.
As far as I understand, engineering humans to be more friendly is a concern for the Chinese. They also happen to be more likely to do genetic engineering than the West.
I notice I boast about things without even considering whether others will find them impressive or shameful—like not attending class. It’s a bad habit not to exercise my consideration, empathy and/or theory of mind more. Reckon I’ve identified the right failure mode here, or am I misattributing?
I think you get it roughly right.
I recently added a quote of Dennett to the rationality quote thread that fits here:
You don’t have to communicate to everybody but it’s very worthwhile to have two way conversations about your important habits with other people. It’s important for mental sanity.
Reintroducing the most meta-concept I’m aware of: integrative complexity!
Munchkining real estate http://www.bloomberg.com/news/articles/2015-10-23/this-startup-tracks-america-s-murder-houses (I’m referring to the resellers mentioned in the article, not the actual startup covered).
Another thing I’ve heard of recently, but not looked into much, is living in a houseboat off the coast of San Francisco and then paddling in on a kayak.
MIRI’s research guide is definitely overkill for interpreting its individual papers.
So, I have reason to believe it will be overkill for interpreting its technical research agenda.
Has anyone done a kind of annotation of the thing that is more amateur friendly?
Surely it’s in MIRI’s best interest to make it more accessible as to compel potential benefactors to support their research?
I’ve finally gotten to reading a bunch of MIRI papers. I don’t pretend to understand them as they are meant to be understood. Can it be predicted whether a maths problem is solved, solvable, unsolved or unsolvable? I feel really...dismayed and discouraged reading through MIRI’s work. I feel as though they are trying to solve questions that cannot be solved. Though, many famous maths problems go from unsolved to solved, and I struggled with high school maths, so I certainly would prefer to defer to some impressive reasoning from you, my peers at LessWrong, before I abandon my support for MIRI.
You should worry more about whether MIRI’s way of doing problems is a good way of solving hard problems, not how hard the problems are.
Problem difficulty is a constant you cannot affect, social structure is a variable.
As I read through the Agenda, I can hear Anna Salamon telling me something along the lines of: if you think something is a rational course of action, the antecedents to that course must necessarily be rational or you are wrong. She doesn’t explain it like that, and I can’t find that popular thread, but whatever...
Now reviewing the research agenda, there are some things which concern me about their way of doing problem solving. I’d appreciate anyone’s input, challenges, clarification and additions:
nice sound bite. No quarrel with this. Just wanted to point it out
for the same reason, I won’t delegate trust to design friendly AI up to strangers at MIRI alone ;)
this is the critical assumption behind MIRI’s approach. Is there any reason to believe this is the case?
shouldn’t establishing this be the very first item in the research agenda, before jumping in to problems they assume are solvable? In fact, the absence of evidence for them being solvable should be evidence of absence...no?
has it been demonstrated anywhere that formalisms are optimal for exception handling?
Is this a legitimate forced choice between pure mathematics and gut level intuition + testing?
MIRI alleges a formal understanding is necessary for robust AI control, then defines formality as follows:
So first, why aren’t they disproving Rice’s theorem?
Okay, show me some data from a very well-designed experiment suggesting theory should come first for the safe development of technology
Honestly, all the MIRI maths and formal logic fetishism got me impressed and awestruck. But I feel like their methodological integrity isn’t tight. I reckon they need some quality statisticians and experiment designers to step in. On the other hand, MIRI operates a very, very good ship. They market well, fundraise well, movement-build well, community-build well, they design well, they write okay now (but not in the past!), they even get shit done, and they bring together very, very good abstract reasoners. And they have been instrumental, through LessWrong, in turning my life around.
In good faith, Clarity, still trying to be the in-house red team and failing slightly less at it one post at a time.
Lots of this going on in the big wide world. Consider looking in more places to deal with selection bias issues.
thanks for the lead :) I’ll get on to it.
I mostly agree, but: You can affect “problem difficulty” by selecting harder or easier problems. It would still be right not to be discouraged about MIRI’s prospects if (1) the hard problems they’re attacking are hard problems that absolutely need to be solved or (2) the hardness of the problems they’re attacking is a necessary consequence of the hardness (or something) of other problems that absolutely need to be solved. But it might turn out, e.g., that the best road to safe AI takes an entirely different path from the ones MIRI is exploring, in which case it would be a reasonable criticism to say that they’re expending a lot of effort attacking intractably hard problems rather than addressing the tractable problems that would actually help.
MIRI would say they don’t have the luxury of choosing easier problems. They think they are saving the world from an imminent crisis.
They might well do, but others (e.g., Clarity) might not be persuaded.
We’ll see :)
Eh, not really. Rice’s theorem.
As any other amateur who reads Eliezer’s quantum physics sequence, I got caught up in the “why do we have the Born rule?” mystery. I actually found something that I thought was a bit suspicious (even though lots of people must have thought of it, or experimentally rejected this already.) Note that I’m deep in amateur swamp, and I’ll gleefully accept any “wow, you are confused” rejections.
Here is my suggestion:
What if the universes that we live in are not located specifically in configuration space, but in the volume stretched out between configuration space and the complex amplitude? So instead of saying “the probability of winding up here in configuration space is high, because the corresponding amplitude is high”, we would say “the probability of winding up here is high, because there are a lot of universes here”. And “here” would mean somewhere on the line between a point in configuration space and the complex amplitude for that point. (All these universes would be exactly equal.) And then we completely remove the Born rule.

Of course someone thought of this, but responds: “But if we double the amplitude in theory, the line becomes twice as long, and there would be twice as many universes. But this is not what we observe in our experiments; when we double the amplitude, the probability of finding ourselves there multiplies by four!” This is true, if you study a line between the complex amplitude peak and a point in configuration space. But you are never supposed to study a point in configuration space; you are supposed to integrate over a volume in configuration space.
Calculating the volume between the complex amplitude “surface” and the configuration space, is not like taking all the squared amplitudes of all points of the configuration space and summing them up. The reason is that, when we traverse the space in one direction and the complex amplitude changes, the resulting volume “curves”, causing there to be more volume out near the edges (close to the amplitude peak) and less near the configuration space axis.
Take a look at the following image (meant to illustrate an “amplitude volume” for a single physical property): [http://www.wolframalpha.com] , type in: ParametricPlot3D {u Sin[t], u Cos[t], t / 5}, {t, 0, 15}, {u, 0, 1}
Imagine that we’d peer down from above, looking along the property axis. If we completely ignore what happens in the view direction, the volume (the blue areas) would have the shape of circles. If we’d double the amplitude, the volume from this perspective would be quadrupled.
But as it is, what happens along the property axis matters. The stretching out causes the volume to be less than the amplitude squared. It seems that the higher the frequency is, the closer the volume comes to a square relationship with the amplitude, while as the frequency lowers, the volume approaches a linear relationship with the amplitude. Studying the two extreme cases: with frequency 0 the geometric object would be just a flat plane, with an obvious linear relationship between amplitude and volume, while with an “infinite” frequency, the geometric object would become a cylinder, with a squared relationship between volume and amplitude. This means that the overall current amplitude-to-configuration-space ratio is important, but as far as I know, it is unknown to us.
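This two-regime claim can be checked numerically. Below is my own rough sketch (not from the thread): it integrates the area of the helix-like surface from the ParametricPlot3D example, parameterized as (u·A·sin t, u·A·cos t, pitch·t), where a small pitch plays the role of high frequency. The function name and parameters are my own inventions.

```python
import numpy as np

def swept_area(amplitude, pitch, turns=10, n=2000):
    """Area of the surface (u*A*sin t, u*A*cos t, pitch*t),
    u in [0, 1], t in [0, 2*pi*turns].
    The surface element |r_t x r_u| works out to A*sqrt(pitch**2 + (u*A)**2),
    so the area factorizes into a t-length times a u-integral."""
    u = np.linspace(0.0, 1.0, n)
    integrand = amplitude * np.sqrt(pitch**2 + (u * amplitude) ** 2)
    # Trapezoid rule written out by hand for portability across NumPy versions.
    per_unit_t = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (u[1] - u[0]))
    return 2 * np.pi * turns * per_unit_t

# High frequency (tight winding): doubling the amplitude ~quadruples the area.
print(swept_area(2.0, 0.01) / swept_area(1.0, 0.01))  # ≈ 4
# Low frequency (stretched out): the relationship is ~linear.
print(swept_area(2.0, 10.0) / swept_area(1.0, 10.0))  # ≈ 2
```

So at least for this single-property surface, the squared-at-high-frequency, linear-at-low-frequency behavior described above does come out of the geometry.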
In a laboratory environment, where all frequencies involved are relatively low, we would see systems evolving linearly. But when we observe the outcome of the systems, and entangle them with everything else, what suddenly matters is the volume of our combined wave which has a very very high frequency.
Or does it? At this point I’m beginning to lose track and the questions starts piling up.
What happens when multiple dimensions are mixed in? I’m guessing that high-frequency/high-amplitude still approaches a squared relationship from amplitude to volume, but I’m not at all certain.
What happens over time as the universe branches, does the amplitude constantly decrease while the length and frequencies remain the same? (Causing the relationship to dilute from squared to linear?)
Note that this suggestion also implies that there really exists one single configuration space / wave function that forms our reality.
So, what do you think?
At least one of us is confused about this post :P
It seems like what you’re doing is strictly more complicated than just doubling the number of dimensions in state-space and using those extra dimensions only so you can say the amount of “stuff” goes as amplitude squared. Which is already very unsatisfying.
I’m really confused where frequency is supposed to come in.
It’s most likely me being confused.
My picture of it right now is that all the dimensions you need in total, are all the dimensions in state-space + 2 dimensions for the complex amplitude. If this assumption is wrong, then we have found the error in my thinking already!
Note that the two complex amplitude dimensions are of course not like the other dimensions. For every position in the state-space, there is a single point in the amplitude dimensions. Or in my suggestion, a line from the origin out to the calculated complex value.
Don’t try to think this through with matrices, there’s a very real chance that what I’m after cannot be captured by matrices at all. I think you have to do a complete geometric picture of it.
How do you feel about floating posts in the Discussion section?
Like: electing a few threads that stay at top for the month/week they are active, the open threads, the monthly media thread, etc.
Is that even possible with LW code?
Is the LW code open source? If not, why not? Is it a fork of the reddit code? Can we update the reddit-specific code? (Reddit has allowed sticky posts since 2013, if I’m not mistaken.) Who takes care of the site?
Yes, the code is open source. Yes, it’s a fork of the reddit code. TrikeApps takes care of the website as a volunteer effort to help MIRI. But LW isn’t a high priority for either of them.
The code is on Google Code: https://code.google.com/p/lesswrong/
The GitHub link on the source code page is dead, but I managed to find this. The code hasn’t been touched since 2009. Is it possible that this same iteration of the code is the one that currently powers LessWrong?
https://github.com/jsomers/lesswrong
I have a few questions and think the guys at LW probably can help. I’m not sure LW is the best place to ask this, but I don’t really know any other place.
Many people (politicians, famous, or what-have-you) have a website and have a “contact” page. How can I write a message that will have an impact? I’m assuming that:
They receive a large volume of email and may not respond or even read it;
The mail may not be delivered to them; maybe they have someone else to take care of it for them.
Those are the things that pop out of my head right now, anything else I should double-check?
If those were the preparations, now we have to get the actual cooking done. How can you make an impactful message? Something that will definitely get their attention, something they might just start thinking about in the middle of the day. Something that will make them stare at the screen and seriously think about it. Most important of all, something that gets them to reply, and a good reply that can make the exchange continue.
I’m willing to put significant effort into this, so don’t be afraid to recommend a book or two, or three.
In the usual way: offer them something they want.
Leaving sex aside, the traditional things are money and power. Impactful letters begin like this: “I { control a large voting bloc | can direct cash flow from a network of donors } and would like to discuss X with you”. Oh, and, of course, impactful letters are NOT sent to the “contact page” address.
My first impulse is that it’s worthwhile to focus on actual substance instead of focusing on trying to engage a politician for the sake of influencing a politician.
The second step is making sure that you don’t appear as clueless as the average person who writes to the politician. Actually try to understand the positions of the stakeholders in the debate you want to comment on, and what the issue looks like from the politician’s point of view.
Third would be to have a role in the debate. You can act as a member of an NGO. You can be a blogger. Failing that, you could be a person who edited the Wikipedia page of the politician and who is trying to understand the politician’s policy better.
The standard way lobbyists get a politician’s interest is also to give them campaign donations.
How well can you disambiguate someone’s notes to self? I’d like to calibrate my powers of mentalisation!
Here are some hypothetical goals someone may have. For those that are unclear or odd, propose what you think they may really be saying and how you came to that conclusion. Can you even infer which of them mean what they say and which hide something secret?
I’ll give you feedback, since I generated and obfuscated the writing myself!
Complete remaining non E3 non research units
Fight with the lions after touring Turkey
Apply for PhD programs in Norway, Germany or UK
Network with intelligent Africans
Get married and have kids in the Baltic countries
Bush tucker tour in south and central Australia
hair silver grey then blue
Buy €M
Investigate Colombian prostitution tolerance zones then Investigate post auc criminal groups (see wiki) networks, organisational structures and psyops & the office of the high counsellor for reintegration.
Not really sure where to ask, but is anyone in contact with Dahlen? We’ve had a cool discussion, but it stopped abruptly and they haven’t posted anything for a while, nor replied to PMs.
What website would you suggest for looking into medical research, for someone who’s not versed in reading medical literature? I’m specifically looking for any developments or studies into the treatment of urethral strictures for my own reference.
The Mayo Clinic provides good introductory descriptions: http://www.mayoclinic.org/diseases-conditions/urethral-stricture/basics/definition/con-20037057
But even if you are not versed in reading medical literature, if you want to learn about new developments, read the original papers. If you run into obstacles, there’s the LW help desk.
Final Kiss of Two Stars Heading for Catastrophe
I have Gmail, Google Drive, Google Calendar, Facebook and Facebook Messenger apps on my mobile (iPhone).
Can I streamline (reduce the number of) my apps without losing functioning?
This sounds like an XY problem—what are you trying to achieve by reducing the number of apps?
XY? What does that refer to? Female chromosomes?
Trying to reduce decision fatigue and streamline my time management. Spending lots of time looking at apps lately.
http://xyproblem.info/
It’s a terrible name, but we seem to be stuck with it.
You can probably do most of what the Facebook app lets you do in Safari. You can add a Google calendar to the stock iOS Calendar app.
You might be able to text whoever you message with Messenger, or just use the website.
Gmail can likewise be set up with the stock Mail app.
The only app you really need is Google Drive.
On my tablet, I use all of those (including the Facebook ones) through Google Chrome. I don’t miss the apps at all.
Attention everyone excellent, in one way or another!
What are the determinants of success for an amateur on the path to expertise in an area you are exceptional at, that isn’t already described accurately on LessWrong?
Can you give a rough estimate of the variance in entrants’ success that can be attributed to each determinant you identify?
No time for modesty now, you’re here to teach and learn!
Ask not what LessWrong can do for you, but what you can do for LessWrong!
Is there any work on developing brain implants (or similar) for pain moderation, for cases of sudden injury where you want to down-regulate pain so you can think clearly, get help, and function?
I don’t feel safe knowing I have to wait for an ambulance to get access to serious painkillers.
It’s probably for the best that access to them is restricted, since strong painkillers are often harmful and liable to abuse, but surely someone is working on solving these issues.
The human brain is quite capable of shutting down pain without any implants, provided you train that ability.
Can you guide me down this rabbit hole?
Dave Elman has a well-known process for shutting down pain via hypnosis. I know two people, face to face, who got their wisdom teeth pulled while shutting off the pain themselves via self-hypnosis.
In CFAR lingo, pain is a very strong signal from System 1, and the fact that System 2 thinks the pain is not useful doesn’t mean that System 1 shuts it off. You actually need a very good relationship between System 1 and System 2 for that to happen.
A good start for that is Gendlin’s Focusing: listen to the uncomfortable feelings in your body to release them. As a beginner you likely won’t release strong physical pain that way, but lesser issues such as a headache can from time to time be released.
Move your locus of self to the afflicted spot (it helps to close your eyes and visualize moving your mind to that point; to practice, if this comes hard to you, close your eyes and visualize flying around the room you’re in); pain vanishes while you hold it there, and returns, slightly diminished, when you relax your focus. Once you’re practiced, you can split your locus of self and direct threads of attention/self onto painful areas, which diminish with the attention.
That’s my description. Your internal descriptions may differ, and/or these instructions may not apply to you in any sense—the internal experience of a mind varies wildly from person to person.
What kind of results do you achieve with that strategy?
Pain in the area of focus fades or vanishes. I’m assuming, by the similar nature implicit between focusing on the pain, and “listening” to the uncomfortable feeling, that there’s some kind of similar action taking place there.
What was the strongest pain to which you successfully applied the technique?
A hand I had accidentally dumped boiling liquid over, although the reduction in pain wasn’t complete in that case, and it was difficult to maintain concentration. (I couldn’t make my attention… large enough? To encompass the entire hand.)
I don’t generally apply the technique, because it’s usually counterproductive; the problem with pain is that it is distracting me from what I want to pay attention to, so giving it my full attention is just making the problem worse.
You mean you have to keep up the mental concentration to keep the pain reduction?
Yes.
The Intelligent Agent Foundations Forum looks very active. I’m glad it has taken off.
Can I solicit any reviews about anyone’s experience with it so far?
The content itself is beyond me. I’m curious whether I should refer to it intermittently while still learning MIRI’s research syllabus, or whether the expectation is to have a command of everything before starting. I suspect the latter, given the calibre of posts, but that may simply be a founder effect and unintended.
It just struck me... I have no idea about the room-for-more-funding considerations for MIRI. Googling suggests that question hasn’t even been seriously analysed before. Surely I’m missing something...
This post discusses MIRI, and what they can do with funding of different levels.
What are you looking for, more specifically?
Why do I find nasal female voices so sexy? Even languages that emphasise nasality sound sexy to me, like Chinese or French, whereas their relatives in the same language family (e.g. Spanish, in the case of French) do not. Is there anything I can do to down-regulate my nasal-voice fetish?
Why do you want less of something you like?
Presumably “nasal voices” aren’t a terminal goal, and he’d like them to stop clouding his judgement of other characteristics that are more important in finding someone he enjoys.
Yes that’s right
I want more of something I like, but only on the precondition that I want to like it. However, there is nothing I like that I have reason to like to the exclusion of all my other likes, so if I can like one thing less, all else constant, it becomes easier to satisfy my liking for the rest. Therefore, it is instrumental to my terminal goals to rank my likes in a strict preference order and to eliminate them in decreasing order of preference.
Operant conditioning...? Like exposing yourself to some stereotypically anti-sexy stimulus when you notice a nasal voice. (Not that I expect this to have much effect, but who knows?)
Has anyone tried ‘happiness meditation’?
I’ve done loving-kindness meditation, which is known to sometimes induce feelings of extreme joy.
What do you want to know?
Yes. I prime myself every morning with love, joy, gratitude, and such.
Do we know whether quantum mechanics could rule out acausal trade between partners outside each other’s light cones? Perhaps it is impossible to model someone so far away precisely enough to get a utility gain out of an acausal trade? I started thinking about this after reading the Wikipedia article on the ‘free will theorem’: https://en.wikipedia.org/wiki/Free_will_theorem
The whole point of acausal trading is that it doesn’t require any causal link. I don’t think there’s any rule that says it’s inherently hard to model people a long way away.
Imagine being an AI running on some high-quality silicon hardware that splits itself into two halves, and one half falls into a rotating black hole (but has engines that let it avoid the singularity, at least for a while). The two are now causally disconnected (well, the one outside can send messages to the one inside, but not vice versa) but still have very accurate models of each other.
Yes, I understand the point of acausal trading. The point of my question was to speculate on how likely it is that quantum mechanics prohibits modelling accurate enough to make acausal trading actually work. My intuition is based on the fact that faster-than-light transmission of information is generally prohibited. For example, even though entangled particles update on each other’s states when they are outside each other’s light cones, it is known that this cannot be used to transmit information faster than light.
Now, does mutually enhancing each other’s utility count as information? I don’t think so. But my instinct is that acausal trade protocols will not be possible, due to the level of modelling required and the noise introduced by quantum mechanics.
I don’t understand. Computers are able to provide reliable boolean logic even though they’re made of quantum mechanics. And any “uncertainty” introduced by QM has nothing to do with distance. You seem very confused.
My question is simply: Do we have any reason to believe that the uncertainty introduced by quantum mechanics will preclude the level of precision in which two agents have to model each other in order to engage in acausal trade?
No. There are any number of predictable systems in our quantum universe, and no reason to believe that an agent need be anything other than e.g. a computer program. In any case “noise” is the wrong way to think about QM; quantum behaviour is precisely predictable, it’s just the subjective Born probabilities that apply.
Critical thinking is a responsibility for every intelligent agent, just as benefiting from the critical thought of others ought to be a right for all life capable of suffering. Millions of Holocaust victims were at the mercy of men just following orders. Never again.
I thought you were going to cite this, which shows a much higher level of critical thinking than most people can manage.
Thank you for the applause lights, but how do we give this “critical thinking” to all the intelligent agents?
We probably can’t do that even for the majority of humans in developed world.
Gotta keep the troops’ morale up too.
Not with that kind of attitude we can’t.
Imperialism and a defence of inequality in capitalist republics
Who was it that first articulated the argument that: since money flows to those who can anticipate and predict the needs of others, those who get power are those who can do that, and therefore, if those people are caring, they are the best capable of looking after the rest?
And what strong critiques of it are available?
I’m troubleshooting my ongoing failures in sustained romantic relationships. My social cognition is impaired, as is the social cognition of autistic people. With some googling about feelings of inadequacy and Asperger’s, I found a book that documents feelings of inadequacy in Asperger’s men as they relate to relationships with women, and that proposes one conditional stimulus (women ‘performing’ better in relationships than them) to explain this phenomenon:
I appreciated this reading, since it characterises a conditional that is not usually characterised in similar literature. It heightens my appreciation for primary qualitative literature in mental health.
Another Asperger-like trait I have is comparable to alexithymia, but is a more general deficit in self-awareness. So, if the author’s hypothesis for the conditional stimulus behind some aspies’ feelings of inadequacy also explains my feelings of inadequacy, I have little or no intuitive sense of whether that is the case. This makes troubleshooting cognitive biases, and by consequence using REBT/CBT techniques, highly inefficient for me, as I have to test every possible logical fallacy I may or may not be making against all possible corrections. The space is narrowed, of course, by knowledge of the kinds of fallacies similar people tend to make and the interventions that tend to work. I wanted to type this out to better wrap my head around my theory about my very slow rate of progress in improving certain aspects of my mental health and social skills. I hope it is useful for anyone else struggling with similar issues, since I know of no one similar enough to me to use as a general point of reference and mentorship for the kinds of problems we may share.
My current attitude to relationship strategy given my asperger like relationship issues reflects the position given here that both aspie and (potential) partners can work together to have a successful relationship.
I’m working on other insecurities too, like insecurity around wearing nice clothes.
If you idealize a woman and seek the perfect woman, you will appear to be performing less well than your image of her. To avoid that effect, it’s good to have a relationship with a person whose flaws you also see, where you both can be open about your flaws.
I had never contemplated this perspective before you suggested it, and it’s immediately compelling. Thank you.
I’m so over getting super fascinated by someone, thinking they’re the sun and moon, then talking to them more and realising they’re just human like the rest of us... agh. I’m so bad with romance, haha. I don’t know how to stop idealising people who seem like perfect matches at the time, only to realise as I talk to them that they’re just regular people. What can I do about this?
Value regular people?
Having a healthy relationship is about relating to another human being, not to a mental ideal. If you idealize them at the beginning, that’s okay; it’s typical human behavior. You also don’t have to commit to a relationship for life for it to be valuable.
I would be interested to see a kind of survey where anyone can rate each active or interested LWer on their proficiency in the content of each sequence. It’s somewhat annoying how my interpretation of the Sequences would appear to cut down on a good deal of the questions asked and re-asked in Discussion comments by some very active members. But perhaps it’s me that’s ignorant. This could be an enlightening self-awareness exercise for many of us, improve the calibre of posts by shaming people who would otherwise claim proficiency their peers may be skeptical of, and raise the sanity waterline among ourselves.
Based on any experiences here, in real life, would you want to meet, avoid, ignore or be indifferent to me?
You seem a very enthusiastic participant here, despite a lot of downmodding. I admire that—on here. In real life my fear would be that that translated into clinginess—wanting to come to all my parties, wanting to talk forever, and the like. (And perhaps that it reflects being socially unpopular, and that there might be a reason for that). So I’d lean slightly to avoid.
Haha, thanks for that analysis. How unexpected and insightful! Your premise is mostly correct, but your conclusion ain’t. I’m extremely clingy with a few people who I have crushes on and idealise at a given time (two at the moment). It’s generally very short-lived (~1 month) and always women, haha. On the other hand, I’m quite popular with friends and acquaintances I haven’t tried to fall in love with or run some cruel social experiment on, and I get invited to lots of parties but rarely accept (goal-oriented, ain’t got time for that!). Then again, my instinctive drive to respond to this may be telling of some degree of insecurity about my social status...
Unsure of the difference between “ignore” and “be indifferent”. I’ll treat both as “not avoid, but not seek”.
My prior distribution for internet commenters for whom I don’t have other social connections is (rounded estimates) around 10% avoid, 90% indifferent and maybe 1% want to meet. LW moves much of the “avoid” into “indifferent”, and maybe quadruples “want to meet”. The few comments I’ve noticed from you specifically match my general LW impressions.
So, biggest weight on “indifferent to meeting you”. Slightly more interested than avoid-ey if I have to make an effort one way or the other.
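The rough numbers in that comment can be sketched as a quick renormalisation. This is only my illustration: rounding “indifferent” to 89% (so the prior sums to 1) and moving 80% of the “avoid” mass are assumptions, not the commenter’s actual figures.

```python
# Illustrative sketch of the prior and the "LW update" described above.
# Assumptions: 'indifferent' rounded to 0.89 so the prior sums to 1,
# and 80% of the 'avoid' mass moves into 'indifferent'.
prior = {"avoid": 0.10, "indifferent": 0.89, "want_to_meet": 0.01}

def lw_update(p):
    """Move most of 'avoid' into 'indifferent', quadruple 'want_to_meet',
    then renormalise so the probabilities sum to 1."""
    moved = 0.8 * p["avoid"]
    posterior = {
        "avoid": p["avoid"] - moved,
        "indifferent": p["indifferent"] + moved,
        "want_to_meet": 4 * p["want_to_meet"],
    }
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

print(lw_update(prior))  # 'want_to_meet' ends up around 4%, 'avoid' around 2%
```

Under these assumed numbers, “want to meet” overtakes “avoid” after the update, matching the direction of the comment.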
The information we can get about you through an internet forum may never be enough for us to give an answer that will be useful to you.
Intelligence Intelligence
If AI is an existential risk, it is a national security risk
If AI is a national security risk, it is a risk intelligence agencies would be interested in
If intelligence (in the spook sense) communities are interested in a risk, they are likely to develop a formal or informal research agenda into that risk
If research agendas in friendly AI exist that are not MIRI’s, MIRI may be interested in accessing said research agendas
Though MIRI’s full technical research agenda is secret, it is plausible that they are not currently collaborating with intelligence agencies
MIRI may stand to benefit from access to AI research agendas from intelligence communities
If MIRI is unable to achieve collaborations on their own, LW activists may be able to assist them
Therefore, LW activists may have an interest in ‘penetrating’ intelligence agencies to extricate their technical research agendas around AI pursuant to greater research excellence and collaboration on AI safety and control problems.
If this is in MIRI’s interest, it may be in a given rationalist’s interest
Rationalists with AI subject matter expertise may be interested in pursuing friendly AI research at the object level instead
Non-subject-matter experts may be interested in penetration with the intention of gaining general access to an intelligence community’s knowledge
Intelligence services actively disqualify those with open curiosity about intelligence matters.
Therefore, penetrating intelligence communities for the purpose of creating greater transparency in the friendly-AI research arena, without the AI subject-matter expertise that might improve one’s likelihood of being assigned to AI safety specifically, may be a poor use of one’s time.
Your first three bullet points seem to imply that entities like the NSA should be expected to have research programmes dedicated to things like pandemics and asteroid strikes. That seems unlikely to me; why would the NSA or CIA or whatever be the right venue for such research? The only advantage of doing it in house rather than letting organizations dedicated to health and space handle it would be if somehow there were some nation-specific interests optimized by keeping their research secret. Which seems unlikely, because if human life is wiped out by an asteroid strike or something then the distinction between US interests and PRC interests will be of … limited importance.
Now, would we expect unfriendly AI research to be any different? I can think of three ways it might be. (1) Maybe an organization like the NSA has more in-house expertise related to AI than related to asteroid strikes. (2) There aren’t large-scale (un)friendly AI research efforts out there to delegate to, whereas agencies like NASA and CDC exist. (3) If sufficiently-friendly AI can be made, it could be harnessed by a particular nation, so progress towards that goal might be kept secret. Of these, #1 might be right but I still think it unlikely that intelligence agencies have enough concentration of relevant experts to be good places for (U)FAI research; #2 is probably true but it seems like the way to fix it would be for the nation(s) in question to fund (U)FAI research if their experts say it’s worth doing; #3 might be correct, but hold onto that thought for a moment.
Jiminy. Are you seriously suggesting that an effective way to enhance AI friendliness research would be an attempt to compromise the security of national intelligence agencies? That seems more likely to be an effective way to get killed, exiled, thrown into jail for a long time, etc.
Let me at this point remind you of the conclusion a couple of paragraphs ago: if in fact there is (U)FAI research going on in intelligence agencies, it’s probably because AI is seen as a possible advantage one nation can have over another. So your mental picture at this point should not be of someone like Edward Snowden extracting information from the NSA, it should be of someone trying to smuggle secret information out of the Manhattan Project. (Which did in fact happen, so I’m not claiming it’s impossible, but it sounds like a really unappealing job even aside from petty concerns about treason etc.)
I notice that your conclusion is that for some people, attempting to breach intelligence agencies’ security in order to extract information about (U)FAI research “may be a poor use of one’s time”. I can’t disagree with this, but it seems to me that something much stronger is true: for anyone, attempting to do that is almost certainly a really bad use of one’s time.
I find it unlikely that US services have such programs without a person like Peter Thiel being aware of the existence of those programs.
You don’t get research collaboration by a strategy of treating other stakeholders in a hostile manner and thinking about penetrating them.
Things to consider when advertising:
Problem recognition
Stimulus discrimination
Necessity or problem recognition, incl. situational influences
All potential alternatives
Decision rules: conjunctive, disjunctive, elimination-by-aspects, compensatory
Reference group influence
Status differentiation
Stimulus generalisation
Based on my reading, yesterday, of the textbook Consumer Behaviour: Implications for Marketing Strategy, fifth edition, by Quester.
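The decision rules named in that list can be sketched in code. A minimal illustration with made-up products, cutoffs, and weights (my own example, not taken from Quester’s textbook):

```python
# Toy choice set: three products scored on three attributes (made-up data).
products = {
    "A": {"price": 7, "comfort": 9, "style": 4},
    "B": {"price": 9, "comfort": 5, "style": 8},
    "C": {"price": 4, "comfort": 6, "style": 6},
}
cutoffs = {"price": 5, "comfort": 5, "style": 5}  # assumed minimum acceptable scores

def conjunctive(items, cuts):
    """Keep only options that clear the cutoff on EVERY attribute."""
    return [n for n, a in items.items() if all(a[k] >= c for k, c in cuts.items())]

def disjunctive(items, cuts):
    """Keep options that clear the cutoff on AT LEAST ONE attribute."""
    return [n for n, a in items.items() if any(a[k] >= c for k, c in cuts.items())]

def elimination_by_aspects(items, cuts, order):
    """Eliminate options attribute by attribute, in order of importance."""
    survivors = dict(items)
    for attr in order:
        remaining = {n: a for n, a in survivors.items() if a[attr] >= cuts[attr]}
        if remaining:
            survivors = remaining
        if len(survivors) == 1:
            break
    return list(survivors)

def compensatory(items, weights):
    """Weighted sum: a strength on one attribute can offset a weakness on another."""
    return max(items, key=lambda n: sum(w * items[n][k] for k, w in weights.items()))

print(conjunctive(products, cutoffs))   # only B clears every cutoff
print(compensatory(products, {"price": 0.3, "comfort": 0.5, "style": 0.2}))
```

Conjunctive, disjunctive, and elimination-by-aspects are non-compensatory rules (one failing attribute can sink an option), while the compensatory rule lets strengths offset weaknesses, which is why it can pick a different winner.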
I got off on the thought of a chair with a maximum sink-into-it factor.
The Pink sink looks a little too comfy.
It may be the hyperstimulus of the chair universe.
It may not be in my best interests, as it may cut productivity and inhibit exercise.
On the other hand, it may impress others, reduce environmental stress and associated fatigue, and probably feels really super good :)