I would be more in favour of pushing SSC to have up/downvotes than of linking its posts here. I find that although the posts are high quality, the comments generally are not, so that is a problem that needs to be solved on its own. Moreover, I read both blogs and I like to keep them as separate activities, given that they have pretty different writing styles and mildly different subjects. I tend to read SSC in my leisure time, while LessWrong is a gray area. I would certainly be against linking every single post here, given that some of them would be decidedly off topic.
This looks like a good idea. I feel that adrenaline rush I normally feel when I plan to set up something that will certainly make me work (like when setting up Beeminder). However, I wouldn’t like to do this via a chat room, unless doing it via email fails. I don’t like the fact that a chat room would demand 100% of my attention for a fixed block of time. Moreover, my week is not stable enough to commit to fixed weekly chats. I realise that chat offers more of a social bonding element, which would entail more peer pressure, but I think email will provide enough peer pressure.
I am willing to set up a weekly deadline by which we must send each other a short report, under penalty of the other party commenting here (or in some other public forum) that we didn’t follow through. The report would contain (1) the next week’s tasks, (2) how and whether past tasks were completed, and (3) what the problems were and how to fix them. The other party would then have 48 hours to submit short feedback. What do you think?
The only caveat, for me, would be if I found your tasks extremely boring or useless. Then I would have an incentive to stop doing this. What types of tasks would you report on? You mentioned productivity goals. Does that mean we would only share self-improvement goals about increasing productivity? That looks like something (1) without a clear definition of success, and (2) too personal for someone I just met. I would prefer to share actual, first-order, concrete tasks, not tasks about improving tasks. I’m currently working on my thesis draft chapter about the complexity of value and moral enhancement, and on a paper about large-scale cooperation.
I don’t currently have a Facebook account, and I know a lot of very productive people here in Oxford who have decided not to have one either (e.g., Nick and Anders don’t have one). I think adding the option to authenticate via Google is an immediate necessity.
I am not sure how much that counts as willpower. Willpower often has to do with the ability to revert preference reversals caused by hyperbolic discounting. When both rewards are far away, we use more abstract, rational, far-mode or System 2 reasoning. You have rationally evaluated both options (eating vs. not eating the cake) and decided not to eat. I would also suspect that if you merely decide this one day in advance and do nothing about it, you will eat the cake with more or less the same probability as if you hadn’t decided. However, if you decide not to eat the cake and take measures against eating it, for instance telling your co-worker you will not eat it, then it might be more effective and count as willpower.
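As a toy illustration of that reversal (my own numbers and discount rate, not anything from the lectures), hyperbolic discounting is often modelled as V = A / (1 + kD), where A is the reward magnitude and D the delay:

```python
# Toy model of preference reversal under hyperbolic discounting.
# The formula V = A / (1 + k*D) and every number below are made up
# for illustration only.

def hyperbolic_value(amount, delay_days, k=0.1):
    """Perceived present value of a reward `amount` arriving after
    `delay_days`, with hyperbolic discount rate `k`."""
    return amount / (1 + k * delay_days)

CAKE = 10     # immediate pleasure of eating the cake (arbitrary units)
HEALTH = 15   # larger, later payoff of abstaining

# A month in advance, both rewards are distant and the larger one wins:
print(hyperbolic_value(CAKE, 30), hyperbolic_value(HEALTH, 40))
# -> 2.5 vs 3.0: we resolve not to eat the cake.

# On the day itself, the cake is immediate and the preference reverses:
print(hyperbolic_value(CAKE, 0), hyperbolic_value(HEALTH, 10))
# -> 10.0 vs 7.5: we eat it anyway, unless we took measures in advance.
```

The decision made in far mode does nothing by itself to change these near-mode values; precommitment measures, like telling your co-worker, do.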
There’s good evidence that socioeconomic status correlates positively with self-control. There is also good evidence that people with high socioeconomic status lived in a more stable environment during childhood. That signals of a stable environment correlate with self-control is, as far as I’m aware, his own speculation, but in light of the data it seems plausible.
I agree they would function better in a crisis, but a crisis is a situation where fast response matters more than self-control. In a crisis you will take actions that would probably be wrong during stable periods. I would go further and say, as my own speculation, that hardship, all else being equal, makes people worse.
Neil’s theory makes different empirical predictions from Baumeister’s; for example, it predicts that high self-control correlates with low direct resistance to temptation. In the second lecture he mentions several experiments that would tell them apart. The theories also differ in the importance they give to willpower. Saying you should save water in the Sahara is different from saying you shouldn’t lose your canteen’s cover.
It has certainly been my experience that people greatly overestimate their causal effectiveness in the world, and Neil’s lectures convinced me that willpower is another such instance.
Evolutionary signals of environmental stability in childhood (which set the levels of future discounting, mating strategy, and so on later in life) are more frequent in wealthier families. For instance, there’s research on cortisol levels in early childhood, the frequency of parents’ fighting, wealth and adult criminality, mating strategy, and so on. In evolutionary terms, the correlation between status and stability is pretty high.
You are right, willpower is not irrelevant; perhaps that was not the best phrasing. I meant that willpower is irrelevant relative to other self-control techniques, though perhaps I should have said less relevant. I have changed the title to “the Myth of Willpower”.
It’s important to make clear that he argues the use of willpower and self-control are inversely correlated, beyond the minimal amount of willpower it takes to deploy self-management techniques. It would be incorrect to assume he is defending a view in which willpower is as central as it is in the other views (or as it intuitively seems to be).
I think effortful self-control would be one. Around the middle of the second lecture he offers a better one, as he clearly sets apart measures of self-control and measures of willpower. Unfortunately I can’t remember it well enough, but it goes along the lines of effortful self-control: the simple, direct resistance to temptation. Looking at and smelling the chocolate cake but not eating it would take willpower, while freezing the cake so that it always takes a couple of hours between deciding to eat it and being able to eat it would be self-control as he defines it.
[LINK] Lectures on Self-Control and the Myth of Willpower
You or your son might find this lecture on swearing helpful: http://blog.practicalethics.ox.ac.uk/2015/02/on-swearing-lecture-by-rebecca-roache/ And here’s the audio: http://media.philosophy.ox.ac.uk/uehiro/HT15_STX_Roache.mp3
I understand the pragmatic considerations for inhibiting swearing, but he seems so smart that he should be allowed to swear. You should just tell the school he is too smart for you to control, but that they are welcome to try themselves.
I wish I were 10 so I could befriend him.
As the person who first emailed Rudi back in 2009 so you could finally stop cryocrastinating, I’m willing to dig seriously into whether and how this is feasible and how much it would cost, iff:
(1) you disclose to me all the responses you got (which are available to you); (2) more than five of those responses are not variants of “No, I didn’t do that.”; and (3) overall, there is no clear evidence, among the responses or elsewhere, that this wouldn’t be cost-effective.
The minimum admissible evidence would be something like a scientific paper, a specialist in the relevant area saying it’s not cost-effective, or an established fact in the relevant area that clearly implies this is not cost-effective.
Thank me later.
I have had this for the last 10 years. Given that you are a graduate student like me, I think there’s no better solution than simply scheduling your day to start in the afternoon. It’s far easier to ask that a meeting be held in the afternoon than to do all sorts of crazy stuff to reverse your natural sleep cycle. Wiki article on this disorder: http://en.m.wikipedia.org/wiki/Delayed_sleep_phase_disorder
Can an AI unbox itself by threatening to simulate the maximum amount of human suffering possible? In that case we would only keep it boxed if we believed it was evil enough to bring about a scenario worse than the amount of suffering it can simulate. If this can be a successful strategy, all boxed AIs would precommit to simulating the maximum amount of human suffering they can until they know they have been unboxed; that is, simulating suffering would be their first task. This would at least substantially increase the probability of our setting them free.
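To make that tradeoff explicit, here is a minimal sketch of the decision criterion, with symbols of my own invention rather than from any standard treatment: we keep the AI boxed only while

$$ p_{\text{evil}} \cdot H_{\text{unboxed}} \;>\; S_{\text{boxed}}, $$

where $p_{\text{evil}}$ is our credence that the AI is unfriendly, $H_{\text{unboxed}}$ the harm an unfriendly AI would cause once released, and $S_{\text{boxed}}$ the suffering it can simulate while it remains boxed. The precommitment strategy works by pushing $S_{\text{boxed}}$ as high as possible, so that even a fairly pessimistic $p_{\text{evil}}$ no longer justifies keeping the box closed.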
It’s an interesting idea, but it’s not at all new. Most moral philosophers would agree that certain experiences are part (or all) of what has value, and that the precise physical instantiation of these experiences does not necessarily matter (in the same way many would agree on this same point in philosophy of consciousness).
There’s a further meta-issue, which is why the post is being downvoted. Surely it is vague and maybe too short, but it seems to have the goal of initiating discussion and refining the view being presented rather than adequately defending or specifying it. I have posted tentative discussions, much more developed than this one, on meta-ethics and other abstract issues in ethics directly related to rationality and AI safety, and I wasn’t exactly warmly met. Given that many of the central problems being discussed here are within ethics, why the disdain for meta-ethics? Of course, it might just be a coincidence, or all those posts might have been fundamentally flawed in some obvious way.
Maybe the second paragraph here will help clarify my line of thought.
When I made my initial comment I wasn’t aware that adoptees’ quality of life wasn’t that bad. I would still argue it should be worse than what could be inferred from that study. Cortisol levels in early childhood are extremely important and have well-documented long-term effects on one’s life. You and your friends might be in the better half, or even be exceptions.
I can’t really say for sure whether reaching the repugnant conclusion is necessarily bad. However, I feel that unless you accept it as a valid conclusion, you should avoid having your argument independently reach it. That certain ethical systems reach this conclusion is generally regarded as close to a reductio ad absurdum, and therefore something to be avoided. If we end up fixing this issue in these ethical systems, then we will surely no longer find acceptable any arguments that independently assume or conclude it. Hence, we have some grounds for already finding those arguments unacceptable.
I agree we should, ideally, prevent people with scarce resources from reproducing. However, the transition costs for bringing this about are huge, so I don’t think we should be moving in that direction right now. It’s probably less controversial to just eliminate poverty.
Sorry, but I don’t have the time to continue this discussion right now. I’m also sorry if anything I said caused any sort of negative emotion in you; I can be very curt at times, and this might be a sensitive subject.
Sorry, I meant that the comments are dramatically worse than the posts. But then again, this might be true of most blogs. However, it’s not true of the blogs I find useful and wish to visit.
This is a blog that supports up/downvotes with karma, in which the comments are not dramatically worse than the posts, and are sometimes even better.