This comment made me subscribe to your posts. I hope to read more on your attempts in the future! (no pressure)
Valdes
I felt like I should share my uninformed two cents.
Interpol seems like a promising lead, if you can get the right person at Interpol to understand the situation. I am not saying this is easy, but maybe you can get an email sent on your behalf to the right mailing lists (alumni of some relevant school, maybe?).
Other comments suggested getting funding from EA, and that sounds fitting to me. But there is probably someone in EA who can connect you with Interpol directly. Maybe you could request that a broad email be sent on top of requesting funding.
I also found this hard to parse. I suggest the following edit:
Omega will send you the following message whenever it is true: “Exactly one of the following statements is true: (1) you will not pull the lever (2) the stranger will not pull the lever ” You receive the message. Do you pull the lever?
And even when the AGI does do work (The Doctor), it’s been given human-like emotions. People don’t want to read a story where the machines do all the work and the humans are just lounging around.
I am taking the opportunity to recommend the Culture series by Iain M. Banks; the books can be read in almost any order, so almost any of them is a good entry point to the series. The books do find space for human-like actors, but I still think they show, by being reasonably popular, that there is an audience for stories about civilizations where AGI does all the work.
Of course, your original point still stands if you say “most people” instead.
I think I found another typo
I have two theses. First of all, the Life Star is a tremendous
For anyone wondering, TMI almost certainly stands for “The Mind Illuminated”, a book by John Yates, Matthew Immergut, and Jeremy Graves. Full title: The Mind Illuminated: A Complete Meditation Guide Integrating Buddhist Wisdom and Brain Science for Greater Mindfulness.
Thank you
As I understand it, that point feels wrong to me. There are many things that I would be sad not to have in my life, but only over the fairly long term, and that are easy to replace quickly. I have only one fridge, and I would probably be somewhat miserable without one (or maybe I could adapt), but it would be absurd for me to buy a second one.
I would say most of the things that I would be sad to miss and that are easy to duplicate are also easy to replace quickly. The main exception is probably data, which should indeed be backed up regularly and safely.
Could you link a source for the once a week coffee? I am intrigued.
I did not yet read your recommendations so I don’t know if the answer is there.
I read the rewrites before I read the corresponding section of the post and, without knowing the context, I find Richard’s first rewrite to be the most intuitive permutation of the three. I fully expect that this will stop once I read the post, but I thought that my particular perspective of having read the rewrites first might be relevant.
Adapted from the French “j’envisage que X”, I propose “I am considering the possibility that X” or, in some contexts, “I am considering X”. “The plumber says it’s fixed, but I am considering he might be wrong”.
I just want to point out that the sentence you replied to starts with an “if”. “If those genes’ role is to alter the way synapses develop in the fastest growth phase, changing them when you’re 30 won’t do anything” (emphasis mine). You described this as “At first you confidently assert that changing genes in the brain won’t do anything to an adult”. The difference is important. This is in no way a comment on the object-level debate. I simply think LessWrong is a place where hypotheticals are useful, and that debates will be poorer if people cannot rely on the safety of knowing that saying “if A then B” will not be interpreted as just saying “B”.
Error message: “Sorry, you don’t have access to this draft”
Makes sense and I think that’s wise (you could also think about it with other people during that time). Do you want to expand on the game-theoretic reasons?
You did, indeed, fuck up so hard that you don’t get to hang out with the other ancestor simulations, and even though I have infinite energy I’m not giving you a personal high resolution paradise simulation. I’m gonna give you a chill, mediocre but serviceable sim-world that is good enough to give you space to think and reflect and decide what you want.
And you don’t get to have all the things you want until you’ve somehow processed why that isn’t okay, and actually learned to be better.
I was with you until this part. Why would you coerce Hitler into thinking like you do about morality? Why be cruel to him by forcing him into a mediocre environment? I suppose there might be game-theoretic reasons for this. But if that’s not where you’re coming from, then I would say you’re still letting the fact that you dislike a human being make you degrade his living conditions in a way that benefits no one.
I think this shows your “universal love” extends to “don’t seek the suffering of others” but not to “the only reason to hurt* someone is if it benefits someone else”.
* : In the sense of “doing something that goes against their interests”.
When I downvote a comment, it is basically never because I want the author to delete that comment. I rarely downvote comments already below 0, but even when I do, it is not because I wish the comment were deleted. Instead, it mostly means that I dislike the way in which that comment was written and thought out; that I don’t want people to have that style / approach when commenting. This correlates with me disagreeing with the position, but not strongly so; and I try to keep my opinions about the object topic to the agree/disagree voting.
I don’t know how representative I am of the LessWrong population in that regard, but I at least think most people who downvote a comment would prefer for it to stay undeleted, if only to keep past discussions legible.
I took the survey and mostly enjoyed it. There are some questions that I skipped because my answer would be too specific and I wanted to keep the ability to speak about them without breaking anonymity.
I also skipped some questions because I wasn’t sure how to interpret certain words.
I don’t have much to add. But I think this is a very well done post, that it is nicely scoped, and that it is about a good and useful concept.
Why not make it so there is a box “ask me before allowing other participants to publish” that is unchecked by default?
I think you’re right but I also think I can provide examples of “true” scaffolding skills:
How to pass an exam: in order to keep learning within the academic system / university / school, you need to regularly do well enough on exams. That is a skill in itself (read the exam in its entirety, know when to move on, learn how hard a question is likely to be depending on the phrasing of the following questions, …). Almost everyone safely forgets most of this skill once they are done studying.
Learn to understand your teacher’s feedback: many teachers, professional or otherwise, suck at communicating their feedback. You often need to develop a skill of understanding that specific individual’s feedback. Of course there is an underlying universal skill of “being good at learning how individuals give feedback”; we could think of it as the skill of “being good at building a specific kind of scaffolding”.
Learn to accept humiliating defeat: a martial artist friend told me it is important, at first, to learn to accept losing all the time, because you learn in the company of strictly better martial artists. Once you get better, you presumably lose less often.