I have taken the survey.
One mistake is treating 95% as the chance of the study indicating two-tailed coins, given that they were two-tailed coins. More likely it was meant as the chance of the study not indicating two-tailed coins, given that they were not two-tailed coins.
Try this:
You want to test if a coin is biased towards heads. You flip it 5 times, and consider 5 heads as a positive result, 4 heads or fewer as negative. You’re aiming for 95% confidence but have to settle for 31⁄32 = 96.875%. Treating 4 heads as a positive result as well wouldn’t work either, as that would get you less than 95% confidence.
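A minimal sketch of those numbers (assuming the usual fair-coin null hypothesis; Python chosen for illustration since nothing above specifies a language):

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    # Probability of k or more heads in n flips of a coin with heads-probability p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 5
# Confidence = chance a fair coin does NOT produce a positive result.
print(1 - prob_at_least(5, n))  # 0.96875 = 31/32, above 95%
print(1 - prob_at_least(4, n))  # 0.8125  = 26/32, below 95%
```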
If we’re aggregating cooperation rather than aggregating values, we certainly can create a system that distinguishes between societies that apply an extreme level of noncooperation (i.e. killing) to larger groups of people than other societies, and that uses our own definition of noncooperation rather than what the Nazi values judge as noncooperation.
That’s not to say you couldn’t still find tricky example societies where the system evaluation isn’t doing what we want; I just mean to encourage further improvement to cover moral behaviour towards and from hated minorities, and in actual Nazi Germany.
Back up your data, people. It’s so easy (if you’ve got a Mac, anyway).
Thanks for the encouragement. I decided to do this after reading this and other comments here, and yes it was easy. I used a portable hard drive many times larger than the Mac’s internal drive, dedicated just to this, and was guided through the process when I plugged it in. I did read up a bit on what it was doing but was pretty satisfied that I didn’t need to change anything.
I think there’s an error in your calculations.
If someone smoked for 40 years and that reduced their life by 10 years, that 4:1 ratio translates to every 24 hours of being a smoker reducing lifespan by 6 hours (360 minutes). Assuming 40 cigarettes a day, that’s 360⁄40 or 9 minutes per cigarette, pretty close to the 11 given earlier.
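Here’s that arithmetic as a quick sketch (the 40 years, 10 years lost, and 40 cigarettes a day are the assumptions above, not established figures):

```python
years_smoking = 40
years_lost = 10
cigarettes_per_day = 40

# 4:1 ratio: each 24 hours of being a smoker costs 24 h * (10/40) = 6 hours.
minutes_lost_per_day = 24 * 60 * years_lost / years_smoking
minutes_per_cigarette = minutes_lost_per_day / cigarettes_per_day

print(minutes_lost_per_day)   # 360.0 minutes, i.e. 6 hours
print(minutes_per_cigarette)  # 9.0 minutes, close to the 11 quoted earlier
```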
This story, where they treated and apparently cured someone’s cancer, by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.
Surely any prediction device that would be called “intelligent” by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like “suppose I—with my current genome—chose to smoke; then what?” and “suppose I—with my current genome—chose not to smoke; then what?”.
But it would be better if you could ask: “suppose I chose to smoke, but my genome and any other similar factors I don’t know about were to stay as they are, then what?” where the other similar factors are things that cause smoking.
In part of the interview LeCun is talking about predicting the actions of Facebook users, e.g. “Being able to predict what a user is going to do next is a key feature”.
But not predicting everything they do and exactly what they’ll type.
I believe that was part of the mistake: answering whether or not the numbers were prime, when the original question, last repeated several minutes earlier, was whether or not to accept a deal.
I expect part of it’s based on status, of course, but part of it could be that it would be much harder for a mugger to escape on a plane. No crowd of people standing up to blend into, and no easy exits.
Also, on some trains you have seats facing each other, so people get used to deliberately avoiding each other’s gaze (edit: I don’t think I’m saying that quite right. They’re looking away), which I think makes it feel both awkward and unsafe.
Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?
A. Conventional economic theory says this shouldn’t happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns. On standard economic theory, improved productivity—including from automating away some jobs—should produce increased standards of living, not long-term unemployment.
You need to include inputs other than labour, and I think conventional economics allows for doing that.
Then the people who are less efficient than machines at converting the other inputs into products may become unemployed, if the machines are cheap enough.
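To make that concrete, here’s a toy sketch using the hot-dog-and-bun numbers above; the fixed “meat” supply is an invented stand-in for a non-labour input, not something from the original argument:

```python
LABOUR = 30  # total units of labour available

def pairs_from_labour(labour_per_hot_dog, labour_per_bun):
    # Matched hot-dog/bun pairs producible from the labour budget alone.
    return LABOUR // (labour_per_hot_dog + labour_per_bun)

print(pairs_from_labour(2, 1))  # 10 pairs: the original equilibrium
print(pairs_from_labour(1, 1))  # 15 pairs after automation, all labour still employed

# Now suppose a non-labour input (enough meat for 12 hot dogs) binds instead.
meat_limit = 12
output = min(pairs_from_labour(1, 1), meat_limit)
labour_used = output * (1 + 1)
print(output, LABOUR - labour_used)  # 12 pairs produced, 6 units of labour left idle
```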
That part of the wiki page was written in this edit.
Nonlinear utility functions (as a function of resources) do not accurately model human risk aversion. That could imply that we should either change our (or their) risk aversion or not be maximising expected utility.
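One standard way to illustrate that (my numbers, purely for illustration): an expected-utility maximiser with a concave utility of wealth, say log utility at $100,000 of wealth, is almost indifferent to a ±$100 coin flip, whereas many people are noticeably risk averse at that scale.

```python
from math import log, exp

wealth = 100_000
stake = 100

# Expected log-utility of a 50/50 bet of +/- stake.
expected_u = 0.5 * log(wealth + stake) + 0.5 * log(wealth - stake)
certainty_equivalent = exp(expected_u)

# Risk premium: what the agent would pay to avoid the bet. Only ~$0.05,
# so curvature of the utility function alone can't explain everyday
# small-stakes risk aversion.
print(wealth - certainty_equivalent)
```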
That’s not intended for people who could work but chose not to. They require you to regularly apply for employment. The applications themselves can be stressful and difficult work if you don’t like self-promotion.
I think I even have work-like play where a game stops being fun. And yes, play-like work is what I want to achieve.
Reinforcing effort only in combination with poor performance wasn’t the intent. Pick a better criterion that you can reinforce with honest self-praise. You do need to start off with low enough standards so you can reward improvement from your initial level though.
I’m interested in what you rewarded for going to bed earlier (or given the 0% success rate, what you planned to reward if it ever happened) and how/when you rewarded it. Maybe rewarding subtasks would have helped.
I just read Don’t Shoot The Dog, and one of the interesting bits was that it seemed like getting trained the way it described was fun for the animals, like a good game. Also, as the skill was learnt, the task difficulty level was raised so it wasn’t too easy. And the rewards seemed somewhat symbolic—a clicker, and being fed with food that wasn’t officially restricted outside the training sessions.
Thinking about applying it to myself, having the reward not be too important outside the game/practice means I’m not likely to want to bypass the game to get the reward directly. Having the system be fun means it’s improving my quality of life in that way in addition to any behaviour change.
I haven’t done much about ramping up the challenge. How does one make doing the dishes more challenging?
But I did make sure to make the rewards quicker/more frequent by rewarding subtasks.
Well, it seems we have a conflict of interests. Do you agree?
Yes. We also have interests in common, but yes.
If you do, do you think that it is fair to resolve it unilaterally in one direction?
Better to resolve it after considering inputs from all parties. Beyond that it depends on specifics of the resolution.
If you do not, what should be the compromise?
To concretize: some people (introverts? non-NTs? a sub-population defined some other way?) would prefer people-in-general to adopt a policy of not introducing oneself to strangers (at least in ways and circumstances such as described by pragmatist), because they prefer that people not introduce themselves to them personally.
Several of the objections to the introduction suggest guidelines I would agree with: keep the introduction brief until the other person has had a chance to respond. Do not signal unwillingness to drop the conversation. Signaling the opposite may be advisable.
Other people (extraverts? NTs? something else?) would prefer people-in-general to adopt a policy of introducing oneself to strangers, because they prefer that people introduce themselves to them personally.
Yeah. Not that I always want to talk to someone, but sometimes I do.
Does this seem like a fair characterization of the situation?
Yes.
If so, then certain solutions present themselves, some better than others. We could agree that everyone should adopt one of the above policies. In such a case, those people who prefer the other policy would be harmed. (Make no mistake: harmed. It does no good to say that either side should “just deal with it”. I recognize this to be true for those people who have preferences opposite to my own, as well as for myself.)
I think people sometimes conflate “it is okay for me to do this” with “this does no harm” and “this does no harm that I am morally responsible for” and “this only does harm that someone else is morally responsible for, e.g. the victim”.
The alternative, by construction, would be some sort of compromise (a mixed policy? one with more nuance, or one sensitive to case-specific information? But it’s not obvious to me what such a policy would look like), or a solution that obviated the conflict in the first place. Your thoughts?
Working out such a policy could be a useful exercise. Some relevant information would be when introductions are more or less bad, for those who prefer to avoid them.
This initially seemed like it would still be very difficult to use.
I didn’t find any easier descriptions of TAPs available on lesswrong for a long time after this was written, but I just had another look and found some more recent posts that suggested a practice step after planning the trigger-action pair.
For example, here:
What are Trigger-Action Plans (TAPs)?
You can either practise with the real trigger, or practise with visualising the trigger.
There’s lots more about TAPs on lesswrong now that I haven’t read yet, but the practice idea stood out as particularly important.