Open Thread March 28 - April 3, 2016
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Big news for visibility: Sam Harris is preparing a book co-written with Eliezer (starting at minute 51 of the podcast).
Personally I think that Eliezer Yudkowsky should find a different co-author, since Sam Harris isn’t related to AI or AGI in any way and I am not sure how much he can contribute.
If the book was targeting AI researchers I would agree that Harris is a poor choice. On the other hand, if the goal is to reach a popular audience, you could do much worse than someone who is very well known in the mainstream media and has a proven track record of writing best selling books.
The friendliness problem isn’t just about AGI but morality, which is something Harris studies.
Would you say there’s an implicit norm in LW Discussion of not posting links to private LessWrong diaspora or rationalist-adjacent blogs?
I feel like if I started posting links to every new and/or relevant SSC or Ribbonfarm post as top-level Discussion topics, I would get downvoted pretty bad. But I think using LW Discussion as a sort of LW Diaspora Link Aggregator would be one of the best ways to “save” it.
One of the lessons of the diaspora is that lots of people want to say and discuss sort-of-rationalist-y things or at least discuss mundane or political topics in a sort-of-rationalist-y way. As far as I can tell, in order to actually find what all these rationalist-adjacent people are saying, you would have to read like twenty different blogs.
I personally wouldn’t mind a more Hacker News style for LW Discussion, with a heavy focus on links to outside content. Because frankly, we’re not generating enough content locally anymore.
I’m essentially just floating this idea for now. If it’s positively received, I might take it upon myself to start posting links.
I pretty regularly post links as comments in the Open Thread.
The current norm in LW is to have few but meaty top-level posts. I think if we start to just post links, that would change the character of LW considerably, going in the Reddit/HN direction. I don’t know if that would be a good thing.
It seems to me, despite talk of change, LW is staying essentially the same… and thereby struggling at an accelerating rate to be a place for useful content.
My current modus operandi for LW is to use the LW bookmark I have in place to (1) check SSC and the other “Rationality Blogs” on the sidebar, and then (2) peruse Discussion (and sometimes comment) if there isn’t a new post at SSC et al. that commands my attention. I wonder if other LWers do the same? I wonder what percentage of LW traffic is “secondary” in a way similar to what I’ve described?
I like your suggestion because it is a radical change that might work. And it’s bad to do nothing if what you are doing seems to be on a trajectory of death.
At some point, during a “how can we make LW better” post on here, I mentioned making LW a de facto “hub” for the rationality blogosphere since it’s increasingly not anything else. I’m now re-saying that and seconding your idea. There could still be original content… but there is nowhere close to enough original content coming in right now to justify LW as a standalone site.
As a data point, this is exactly how I’ve been using LessWrong for at least the last year. One of the reasons I comment more frequently in open threads is that we can have more idle conversations like this one as well :P
I first check the “rationality blogs”, and then the “Discussion”.
I suggest posting links to specific things you think are interesting with some text about what you want to discuss about them.
I think that LW as it is set up now is not good for links; you need to click on the post, and then click again. I think that LW should have reddit-style linkposts, where there’s a link and then another URL for the comments. (The relevant github issue.)
Rob Bensinger published the Library of Scott Alexandria, his summary/”Sequences” of the historically best posts from Scott (according to Rob, that is). Scott seems to pursue or write on topics with a common thread between them in cycles of a few months. This can be observed in the “top posts” section of his blog. Sometimes I forget a blog exists for a few months, so I don’t read it, but when I do read diaspora/rationality-adjacent blogs, I consider the reading personally valuable. I’d appreciate LessWrong users sharing pieces from their favourite blogs that they believe would also appeal to many users here. So making a top-level post once in a while that links to several articles from one author, sharing their best recent posts relevant to LessWrong’s interests, seems reasonable. I agree that making a top-level post for every single link, or for all links from a separate blog, would be too much, and that this implicit norm should continue to exist.
I think the norm today is to provide a short summary of the link target. (If I must click on the link to find out why you linked it, that’s an almost guaranteed downvote.)
But I could imagine having a “subreddit” consisting of links only, where the norm would be different.
And of course, links to lower-quality articles can still be downvoted.
Unless there’s some novel point in the post, or reason to discuss it here rather than there, I’d rather not have a link post. Let people who want to read more outside blogs do so, rather than “aggregating”.
I would be more inclined to read outside rationality-adjacent blogs if there were some form of familiar-feeling (as opposed to a new website) aggregation than I would be if there were none, and I had to actively search them out.
CGP Grey has read Bostrom’s Superintelligence.
Transcript of the relevant section:
I like how this response describes motivated cognition, the difficulty of changing your mind, and the Litany of Gendlin.
He also apparently discusses this topic on his podcast, and links to the amazon page for the book in the description of the video.
Grey’s video about technological unemployment was pretty big when it came out, and it seemed to me at the time that he wasn’t too far off of realising that there were other implications of increasing AI capability that were rather plausible as well, so it’s cool to see that it happened.
http://lesswrong.com/lw/nfk/lesswrong_2016_survey/
Friendly weekly reminder that the survey is up and you should take it.
Another friendly reminder that you can take it even if you do not have a LessWrong account.
[LINK]
Slate Star Codex Open Thread
There seems to be some relevant stuff this week:
Katie Cohen, a member of the rationality community in the Bay Area, and her daughter have fallen on some hard times and are the beneficiaries of a fundraiser anonymously hosted by one (or more) of their friends. I don’t know them, but Rob Bensinger vouched on social media that he is friends with everyone involved, including the anonymous fundraiser.
Seems like there are lots of good links and corrections from the previous links post, so check it out if you found yourself reading lots of SSC links this week.
Scott is moving back to the Bay Area next year, and is looking for doctors from the area to talk to about setting himself up with a job as a psychiatrist.
The moon may be why earth has a magnetic field. If it takes a big moon to have a life-protecting magnetic field, this presumably affects the Fermi paradox.
EY arguing that a UFAI threat is worth considering—as a response to Bryan Caplan’s scepticism about it. I think it’s a repost from Facebook, though.
ETA: Caplan’s response to EY’s points. EY answers in the comments.
But, isn’t this what he’s been saying for years? What’s the point in posting about it?
Caplan posted that he was skeptical, Yudkowsky responded with “which part of this argument do you disagree with?”
EY warns against extrapolating current trends into the future. Seriously?
Why does that surprise you? None of EY’s positions seem to be dependent on trend-extrapolation.
Trend extrapolation is more reasonable than invoking something that hasn’t happened at all yet, and then claiming, “When this happens, it will become an unstoppable trend.”
It would be more reasonable to use trend-extrapolation if it was a field where you would necessarily be able to discern a trend. Yudkowsky argues there could be sharp discontinuities. Personally I don’t really feel qualified to have a strong opinion, and I would not be able to discern a trend even if it exists.
Other than a technological singularity with artificial intelligence explosion to a god-like level?
I don’t believe that prediction is based on trend-extrapolation. Nothing like that has ever happened, so there’s no trend to draw from.
You are right about the singularity, but the underlying trend extrapolation is that of technical progress and, specifically, of software getting smarter.
Nowadays people have gotten used to rapid technical progress and often consider it, um, inevitable. A look at history should disabuse one of that notion, though.
Yudkowsky explicitly doesn’t believe in rapid technical progress. He’s talked about the fact that he believes in the Great Stagnation (slowdown in science/tech/economic progress) which is possibly a good thing since it may retard the creation of AGI, giving people a better shot to work on friendliness first.
Links? What is “rapid”? Did he look at his phone recently?
The Great Stagnation is a phenomenon on the time scale of decades. How about the time scale of centuries?
Here is one: https://www.facebook.com/yudkowsky/posts/10152586485749228 .
Yes, he believes in the Great Stagnation. That does not imply he doesn’t believe in rapid technological progress. Again, what is “rapid”?
If you don’t have a good primary care doctor or are generally looking to trade money for health, and live in the Bay Area, Phoenix, Boston, NY, Chicago, or Washington DC, I’d recommend considering signing up for One Medical Group, a service that provides members with access to a network of competent primary care doctors, as well as other benefits. They do charge patients a $150 yearly membership fee in addition to charging co-pays similar to what you’d pay at any other primary care physician’s office, but in return for this, they hire more competent doctors, employ a large support staff that can nudge you to take care of outstanding health concerns, and are generally good at talking you into taking preventative measures to safeguard your health.
(My only incentive for posting this is that I want LessWrongers to be healthy. My reasoning is roughly that if you’re willing to spend money on cryonics, then you’d probably be willing to spend money on quality preventative healthcare, too).
Added: I benefited quite a bit from signing up with them, specifically because their doctors and staff are so kind that it nudged me to be less afraid of going to the doctor. This made it easier for me to take preventative steps toward being healthier in general.
I’m a One Medical member. The single biggest draw for me is that you can get appointments the same or next day with little or no waiting time—where my old primary care doctor was usually booked solid for two weeks or more, by which point I’d either have naturally gotten over whatever I wanted to see him for, or have been driven to an expensive urgent care clinic full of other sick people.
They don’t bother with the traditional kabuki dance where a nurse ushers you in and takes your vitals and then you wait around for fifteen minutes before the actual doctor shows, either—you see a doctor immediately about whatever you came in for, and you’re usually in and out in twenty minutes. It’s so much better of a workflow that I’m astonished it hasn’t been more widely adopted.
That said, they don’t play particularly nice with my current insurance, so do your homework.
Be careful. They charge a lot more for services, due to how they bill.
What do LessWrongers think of terror management theory? It has its roots in Freudian psychoanalysis, but it seems to be getting more and more evidence supporting it (here’s a 2010 literature review).
Imagine a case of existential risk, in which humanity needs to collectively make a gamble.
We prove that at least one of 16 possible choices guarantees survival, but we don’t know which one.
Question: can we acquire a quantum random number that is guaranteed to be independent from anything else?
I.e. such that the whole world is provably guaranteed to enter quantum superposition of all possible outcomes, and we provably survive in 1/16th of the worlds?
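(A minimal sketch of the mechanics of “one uniformly random choice out of 16”, leaving the hard question of a truly quantum source aside. The read_quantum_bits helper below is hypothetical, not a real API; Python’s secrets module stands in as a classical placeholder and gives no branching guarantee at all.)

```python
import secrets

def read_quantum_bits(n_bits: int) -> int:
    """Hypothetical stand-in for a hardware quantum RNG.

    A real implementation would read n_bits from a device whose output
    comes from quantum measurement; here the OS entropy pool is used,
    which only illustrates the interface, not the branching guarantee.
    """
    return secrets.randbits(n_bits)

def pick_survival_gamble(num_choices: int = 16) -> int:
    """Pick one of num_choices options uniformly (num_choices must be a power of two)."""
    assert num_choices & (num_choices - 1) == 0, "use a power of two"
    bits_needed = (num_choices - 1).bit_length()   # 4 bits for 16 choices
    return read_quantum_bits(bits_needed)          # uniform value in range(num_choices)

if __name__ == "__main__":
    print("Chosen option:", pick_survival_gamble())
```

The only arithmetic point is that 4 independent quantum bits give each of the 16 options equal weight, so under the MWI reading above each outcome would get branch measure 1/16.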
Are you assuming a non-MWI universe? Doesn’t every source of randomness just imply different branches in proportion to their amplitude?
I’m assuming MWI, but noticing that NOT every source of “randomness” implies different branches.
Some things may, or may not, be interconnected in ways we don’t see, and can’t detect. (E.g. it’s been shown that humans can learn to flip coins predictably etc.)
My point is to provably, with as high confidence as we require, split the world into different branches, as opposed to having a pretty good chance (but we don’t know exactly how good) of creating different branches.
You could get a really random number using cosmic rays from remote quasars, but I think that true quantum randomness is not necessary in this case. Big world immortality could work anyway—there are many other Earths in the multiverse.
Superposition may also not be necessary for QI to work. It may be useful if you want to have some kind of interaction between different outcomes, but that seems impossible for such a large system.
The main thing which I would worry about, if I tried to use QI to survive x-risks, is that the death of all civilization should be momentary. If it is not momentary, there will be a period of time when observers know that a given risk has begun but have not died yet, and so they will be unable to “jump” to another outcome. Only false vacuum decay provides momentary death for everybody (though not exactly simultaneous, given Earth’s size of 12,000 km and the limited speed of light).
Another option for using QI to survive x-risks is to note that the me-observer must survive any x-risk, if QI is true. So any x-risk will have at least one survivor—one wounded man on an empty planet.
We could use this effect to ensure that a group of people survives, if we connect the me-observer with that group by a necessary condition of dying together. For example, we are all locked in a submarine full of explosives. In most of the worlds there are two outcomes: all the crew of the submarine dies, or everybody survives.
If I am in such a submarine, and QI works, then we—all the crew—probably survive any x-risk.
In short, the idea is to convert slow x-risks into a momentary catastrophe for a group of people. In the same way we may use QI personally to fight slow death from aging, if we sign up for cryonics.
Errr.
Let me put it like this: it all adds up to normality.
No matter how many times you say “quantum”, it doesn’t make it a good idea to fill submarines with bombs.
If your reasoning leads you to believe otherwise, you’d better check your reasoning again.
Hint: Let’s say I play a lottery. Do I make the situation better for myself if I construct a device that destroys Earth and kills all possible observers in case I don’t win the lottery?
Seeing my point here?
I don’t say quantum—so-called QI works in a big world, which may be non-quantum, just large enough to include many copies of the Earth.
Egan’s law—it is not Egan’s, it is not a law, and it is not true; and someone who tried to question it was extensively downvoted and left LW.
Most submarines are already filled with bombs.
Anyway, I think that big world immortality has only a small chance of actually working (in a predictable way), so it would not be wise to use it as any first line of defence. But it may be used as a third line of defence.
For example, if I build a spaceship to survive an impending catastrophe on Earth, I also use the fact that its most likely catastrophic failure modes will kill all the crew in one moment.
The same goes for cryonics. It is good in itself, but if big world immortality works, it will be even better.
Who? The last LW kerfuffle I can remember that involved Egan’s law (and some other Egan things) was around Eitan_Zohar who did indeed get a lot of downvotes and does seem to have left LW. But so far as I can tell he didn’t get those downvotes for trying to question “Egan’s law”.
The post I meant is here http://lesswrong.com/lw/dlb/adding_up_to_normality/
Interesting. The general consensus in that thread seems to have been that the user in question was missing the point somehow, and −3 isn’t really such a terribly low score for something generally thought to have been missing the point. (I guess it was actually +6 −9.)
I don’t think the poor reception of “Adding up to normality” is why the user in question left LW. E.g., this post was made by the same user about 6 months later, so clearly s/he wasn’t immediately driven off by the downvotes on “Adding up to normality”.
Anyway. I think I agree with the general consensus in that thread (though I didn’t downvote the post and still wouldn’t) that the author missed the point a bit. I think Egan’s law is a variant on a witticism attributed to Wittgenstein. Supposedly, he and a colleague had a conversation like this. W: Why did anyone think the sun went round the earth? C: Because it looks as if it does. W: What would it have looked like, if it had looked as if the earth went round the sun? The answer, of course, being that it would have looked just the way it actually does, because the earth does go round the sun and things look the way they do.
Similarly (and I think this is Egan’s point), if you have (or the whole species has) developed some attitude to life, or some expectation about what will happen in ordinary circumstances, based on how the world looks, and if some new scientific theory comes along that predicts the world will look that way, then either you shouldn’t change that attitude or it was actually inappropriate all along.
Now, you can always take the second branch and say things like this: “This theory shows that we should all shoot ourselves, so plainly if we’d been clever enough we’d already have deduced from everyday observation that we should all shoot ourselves. But we weren’t, and it took the discovery of this theory to show us that. But now, we should all shoot ourselves.” So far as I can tell, appealing to Egan’s law doesn’t do anything to refute that. It just says that if something is known to work well in the real world, then ipso facto our best scientific theories tell us it should work well in the world they describe, even if the way they describe that world feels weird to us.
I agree with the author when s/he writes that correct versions of Egan’s law don’t at all rule out the possibility that some proposition we feel attached to might in fact be ruled out by our best scientific theories, provided that proposition goes beyond merely-observational statements along the lines of “it looks as if X”.
So, what about the example we’re actually discussing? Your proposal, AIUI, is as follows: rig things up so that in the event of the human race getting wiped out you almost certainly get instantly annihilated before you have a chance to learn what’s happening; then you will almost certainly never experience the wiping-out of the human race. You describe this by saying that you “probably survive any x-risk”.
This seems all wrong to me, and I can see the appeal of expressing its wrongness in terms of “Egan’s law”, but I don’t think that’s necessary. I would just say: Are you quite sure that what this buys you is really what you care about? If so, then e.g. it seems you should be indifferent to the installation of a device at your house that at 4am every day, with probability 1⁄2, blows up the house in a massive explosion with you in it. After all, you will almost certainly never experience being killed by the device (the explosion is big and quick enough for that, and in any case it usually happens when you’re asleep). Personally, I would very much not want such a device in my house, because I value not dying as well as not experiencing death, and also because there are other people who would be (consciously) harmed if this happened. And I think it much better terminology to describe the situation as “the device will almost certainly kill me” than as “the device will almost certainly not kill me”, because when computing probabilities now I want to condition on my knowledge, existence, etc., now, not after the relevant events happen.
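(A back-of-envelope number, mine rather than the commenter’s, for why “the device will almost certainly kill me” is the right description: if the device fires independently with probability 1/2 each night, then

$$P(\text{still alive after } n \text{ nights}) = \left(\tfrac{1}{2}\right)^{n}, \qquad \left(\tfrac{1}{2}\right)^{30} \approx 9.3\times 10^{-10},$$

so within a month survival is essentially a rounding error, even though no surviving version of me ever experiences the explosion.)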
Am I applying “Egan’s law” here? Kinda. I care about not dying because that’s how my brain’s built, and it was built that way by an evolutionary process formed in the actual world where a lineage isn’t any better off for having its siblings in other wavefunction-branches survive; and when describing probabilities I prefer to condition only on my present epistemic state because in most contexts that leads to neater formulas and fewer mistakes; and what I’m claiming is that those things aren’t invalidated by saying words like “anthropic” or “quantum”. But an explicit appeal to Egan seems unnecessary. I’m just reasoning in the usual way, and waiting to be shown a specific reason why I’m wrong.
I meant that not only his post but most of his comments were downvoted, and from my personal experience, if I get a lot of downvotes I find it difficult to continue rational discussion of the topic.
Egan’s law is very vague in its short formulation. It is not clear what “all” is; what kind of law it is—epistemic, natural, legal; or what normality is—physics, experience, our expectations, our social agreements. So it is mostly used as a universal objection to any strange thing.
But there are lots of strange things. Nukes were not normal before they were created, and if one applied Egan’s law before their creation, one might claim that they were not possible. Strongly self-improving AI is also something new on Earth, but we don’t use Egan’s law to disprove its possibility.
Your interpretation of Egan’s law is that everything useful should already be used by evolution. In the case of QI, it has some similarities to the anthropic principle, by the way, so there is nothing new here from an evolutionary point of view.
You also suggest using Egan’s law normatively: don’t do strange risky things.
I could suggest a more correct formulation of Egan’s law: it all adds up to normality in local surroundings (and in normal circumstances).
And from this it follows that when the surroundings become large enough, everything is not normal (think about black holes, the Sun becoming a red giant, or strange quantum effects at small scales).
In local surroundings, Newtonian, relativistic, and quantum mechanics produce the same observations and the same visible world. Also, in normal circumstances I will not put a bomb in my house.
But, as the OP suggested, I know that soon 1 of 16 outcomes will happen, of which 15 will kill the Earth and me, so my best strategy should not be the normal one. In this case, going into a submarine with a diverse group of people capable of restoring civilization may be the best strategy. And here I get benefits even if QI doesn’t work, so it is a positive-sum game.
I put only a 10 percent probability on QI working as intended, so I will try any other strategy which has a higher payoff (if I have one). That is why I will not put a bomb under my house in normal situations.
But there are situations where I don’t risk anything by using QI, but benefit if it works. One of them is cryonics, for which I have signed up.
Well, for the avoidance of doubt, I do not endorse any such use and I hope I haven’t fallen into such sloppiness myself.
No, I didn’t intend to say or imply that at all. I do, however, say that if evolution has found some particular mode of thinking or feeling or acting useful (for evolution’s goals, which of course need not be ours) then that isn’t generally invalidated by new discoveries about why the world is the way that’s made those things evolutionarily fruitful.
(Of course it could be, given the “right” discoveries. Suppose it turns out that something about humans having sex accelerates some currently unknown process that will in a few hundred years make the earth explode. Then the urge to have sex that evolution has implanted in most people would be evolutionarily suboptimal in the long run and we might do better to use artificial insemination until we figure out how to stop the earth-exploding process.)
You could have deduced that I’d noticed that, from the fact that I wrote
but no matter.
I didn’t intend to say or imply that, either, and this one I don’t see how you got out of what I wrote. I apologize if I was very unclear. But I might endorse as a version of Egan’s law something like “If something is a terrible risk, discovering new scientific underpinnings for things doesn’t stop it being a terrible risk unless the new discoveries actually change either the probabilities or the consequences”. Whether that applies in the present case is, I take it, one of the points under dispute.
I take it you mean might not be; it could turn out that even in this rather unusual situation “normal” is the best you can do.
I have never been able to understand what different predictions about the world anyone expects if “QI works” versus if “QI doesn’t work”, beyond the predictions already made by physics. (QI seems to me to mean: standard physics, plus a decision to condition probabilities on future rather than present epistemic state. The first bit is unproblematic; the second bit—which is what you need to say e.g. “I will survive”—seems to me like a decision rather than a proposition, and I don’t know what it would mean to say that it does or doesn’t work.)
I’m not really seeing any connection to speak of between cryonics and QI. (Except for this: suppose you reckon that cryonics has a 5% chance of working on other people, but QI considerations lead you to say that for you it will almost certainly work. No, sorry, I see you give QI a 10% chance of working; so I mean that for you it will work with probability more like 10%. Does that mean that you’d be prepared to pay about twice as much for cryonics as you would be without bringing QI into it, given the presumably regrettable costs for whatever influence you might have hoped to have post mortem using the money: children, charities, etc.?)
Turchin may have something else in mind, but personally (since I’ve also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to death will be their final ones (and death will inevitably come). If “QI works”, this will not be true: there will never be final experiences, but instead there will be an eternal (or perhaps almost eternal) chain of experiences and thus no final death, from a first-person point of view.
Of course, it could be that if you’ve accepted MWI and the basic idea of multiple future selves implied by it then this is not very radical, but it sounds like a pretty radical departure from our usual way of thinking to me.
I think your last paragraph is the key point here. Forget about QI; MWI says some small fraction of your future measure will be alive very far into the future (for ever? depends on difficult cosmological questions); even objective-collapse theories say that this holds with nonzero but very small probability (which I suggest you should feel exactly the same way about); every theory, quantum or otherwise, says that at no point will you experience being dead-and-unable-to-experience things; all QI seems to me to add to this is a certain attitude.
Another interpretation is that it is a name for an implication of MWI that even many people who fully accept MWI seem to somehow miss (or deny, for some reason; just have a look at discussions in relevant Reddit subs, for example).
Objective-collapse theories in a spatially or temporally infinite universe or with eternal inflation etc. actually say that it holds with nonzero but very small probability, but essentially give it an infinite number of chances to happen, meaning that this scenario is for all practical purposes identical to MWI. But I think what you are saying can be supposed to mean something like “if the world was like the normal intuitions of most people say it is like”, in which case I still think there’s a world of difference between very small probability and very small measure.
I’m not entirely convinced by the usual EY/LW argument that utilitarianism can be salvaged in an MWI setting by caring about measure, but I can understand it and find it reasonable. But when this is translated to a first-person view, I find it difficult. The reason I believe that the Sun will rise tomorrow morning is not because my past observations indicate that it will happen in a majority of “branches” (“branches” or “worlds” of course not being a real thing, but a convenient shorthand), but because it seems like the most likely thing for me to experience, given past experiences. But if I’m in a submarine with turchin and x-risk is about to be realized, I don’t get how I could “expect” that I will most likely blow up or be turned into a pile of paperclips like everyone else, while I will certainly (and only) experience it not happening. If QI is an attitude, and a bad one too, I don’t understand how to adopt any other attitude.
Actually, I think there are at least a couple of variations of this attitude: the first one that people take upon first hearing of the idea and giving it some credibility is basically “so I’m immortal, yay; now I could play quantum russian roulette and make myself rich”; the second one, after thinking about it a bit more, is much more pessimistic; there are probably others, but I suppose you could say that underneath there is this core idea that somehow it makes sense to say “I’m alive” if even a very small fraction of my original measure still exists.
QI predicts not different variants of the world, but different variants of my future experiences. It says that I will not experience “non-existence”, but will experience my most probable way of surviving. If I have a 1-in-1000 chance of surviving some situation, QI shifts the probability that I will experience survival up to 1.
But it could fail in unpredictable ways: if we are in a simulation and my plane crashes, my next experience will probably be a screen with the title “game over”, not the experience of being alive on the ground.
I agree with what you said in brackets about cryonics. I also think that investing in cryonics will help to promote it and all other good things, so it doesn’t conflict with those regrettable costs. I think that one rational course of action is to make a will in which one gives all one’s money to a cryocompany. (It also depends on the existence and well-being of children, and on other useful charities which could prevent x-risks, so it may need more complex consideration.)
Whether or not momentary death is necessary for multiverse immortality depends on what view of personal identity is correct. According to empty individualism, it should not matter that you know you will die; you will still “survive” while not remembering having died, as if that memory had been erased.
I think the point is that if extinction is not immediate, then the whole civilisation can’t exploit big world immortality to survive; every single member of that civilisation would still survive in their own piece of reality, but alone.
It doesn’t really matter if it’s immediate according to empty individualism. Instead the chance of survival in the branches where you try to die must be much lower than the chance of choosing that world.
You can never make a perfect doomsday device, because all kinds of things could happen to make it fail at the moment or during preparation. Even if it operates immediately.
Mnemotechnics
Tim Ferriss’s recent podcast with Carl Shurmann had a quote of a quote that stuck with me: ‘the good shit sticks’, said by a writer when questioned on how he remembers good thoughts when he’s constantly blackout drunk. That kind of ‘memory optimism’, as I call it, seems a great way to mitigate memory doubt disorder, which I’d guess is more common among skeptics, rationalists, and other purveyors of doubt.
Innovation in education
Does your alma mater have anything resembling an academic decisions tribunal or an administrative decisions tribunal?
We should establish an ‘academic teaching and administration tribunal’ to provide independent oversight of teaching quality and administrative decisions (such as whether or not to answer a particular student enquiry), to which students, fellow staff, and any other whistleblower could anonymously refer matters with less fear of repercussion. Sometimes matters are too small to warrant seeking higher-order intervention, like the coordinator of your degree, when a subject coordinator messes up. However, these little problems can have a serious impact on the student too. Right now, the only independent redress is the courts of law, with nothing in between.
Webapp ideas
A service that will ‘publish all my emails to my website or through an external service’ if my account becomes inactive, say due to death or going missing. I know I can have my account data shared with trusted contacts, but how about a service?
A service to sustain webpage maintenance after death, without relying on others to maintain it.
A Google login bot to log in to a Gmail account every 8 months so it doesn’t get deleted... say, if you do cryonics.
A job application service (one that will apply for jobs on your behalf) that doesn’t have a super unprofessional page or lots of spelling mistakes on its website and in its customer service.
Other ideas
Volunteer-staffed childcare centre—many people would pay to take care of cute children... but only for a couple of hours. Many people do pay others to take care of their children. Both volunteers and paid staff can get working-with-children checks to screen against pedophiles, and paid staff working full hours could manage the surge capacity and the time in between volunteers. Still, it would make a compelling business case.
Personal development
I’m not satisfied with my personality or relationships, and I don’t have a clear sense of the way I make meaning from the world. So I’ll read the Wikipedia pages on meaning, NPD, and BPD (which I think I have), and follow radical, formulaic, process-oriented approaches to how I relate to people from now on.
I reckon it’s the stress of university bringing on the recent tanking in my mood.
On the flip side, I have the positive emotions, engagement, relationships (sorta), and achievement parts of the PERMA model of human flourishing down pat, and have staved off enduring depression and anxiety disorder traits for a while, not to mention psychosis.
New roommates
Familiarity licenses others with freedom of action, and deprives the familiar of freedom from interference.
How much people want to “take care of cute children but only for a few hours” might be a (very?) bad predictor of how good they are at taking care of children.
~~~
I think gmail just doesn’t cut it if you want to store your information reliably while you are vitrified for many years. Also, why in the world would you protect your old e-mails, of all things?
Childcare is a lucrative business already; there is probably nothing (short of administration work) stopping an existing childcare business from taking on volunteers (talking about the easiest way to make this happen by slightly modifying the existing world). But I don’t know of many (any) people willing to do that kind of thing.
Volunteers are a tricky business too. As is the duty of care towards children, not just in terms of the assumed privacy and protection, but also in providing a positive, stimulating environment (which becomes more difficult to ensure when relying on volunteers).
I’m not able to see the post Ultimate List of Irrational Nonsense on my Discussion/New/ page even though I have enabled the options to show posts that have extremely negative vote counts (-100) while signed in. I made a request in the past about not displaying those types of posts for people who are not signed in. I’m not sure if that’s related to this or not.
It’s not the karma: I can see Gleb’s post on Brussels with a lower score and a lower %, but not that post. (When not logged in I can’t see the Brussels post.) Probably it was “deleted.” That is a state where the permalink continues to work, but the post does not appear on various indices. I think that if the author or moderator wants to break the permalink, they have to click “ban.” Deleting an account does not delete all the posts, at least in the past.
OP deleted the account used to post the article
Is there a fanfic about how Cassandra did not tell people the future, but simply ‘what not to do’, and lied and schemed her way to the top and saved Troy...?
This exists, at least.
There’s this powerful one-page fanfic.
Thank you so much! A friend of mine has a thing about Cassandra, she’ll love it.
People with narcissistic personality disorder should be offered avenues and support for treatment, not manipulated reciprocally.
If they gaslight and you are susceptible to it, stop fighting them and retreat. They will win.
Gang affiliation and violent behaviour suggest you should keep safe and avoid them. That’s why we have police, in case they trip up.
Choose your friends
Uh, except this guy is unethical and I’m unsure what avenue to pursue to minimize the risk of future injury to other people. Since I know him the best, I’m relatively sure he does not need support, nor will he accept it. He is not being “manipulated”; he is just straightforwardly unethical.
there’s no way this retard would accept treatment >_>.
On the practicalities of catastrophe readiness.