Here is a list of all my public writings and videos.
If you want to do a dialogue with me, but I didn’t check your name, just send me a message instead. Ask for what you want!
If you enjoy The Big Short (2015), you may enjoy Margin Call (2011) too. It covers similar territory (what to do in a market crash), but I feel it is more professional and dispassionate.
I didn’t know about that. That sounds like fun!
In my experience, there are two main ways that “trying to do good” fails and ends up making things worse.
1. You try halfheartedly and then give up. This happens when you don’t care much about doing good.
2. You do something in the name of good, but don’t look too closely at the details, and end up doing harm.
#2 is particularly endemic in politics. The typical political actor puts barely any effort into figuring out if what they’re advocating for is actually good policy. This isn’t a bug. It’s by design.
I liked the ending of this story.
No, but you can create an alt account.
If you don’t think OpenAI is reasonably likely to make trillions, and also to pay them out, then you should want to sell your stake, and fast.
And vice-versa. I bought a chunk of Microsoft a while ago, because that was the closest thing I could do to buying stock in OpenAI.
Thanks!
This post makes me feel better about my writing process. I write how I think, which means I can get away with little editing.
I think the answer is: the homunculus concept has a special property of being intrinsically attention-grabbing…. The homunculus is thus impossible to ignore—if the homunculus concept gets activated at all, it jumps to center stage in our minds.
I don’t fully understand this bit. I feel like I’m reading a mathematical proof where the author leaves out steps that are trivial to the author, but not to me.
If the kid is enjoying the robot stories then that’s definitely the place to start. Foundation goes well after robots.
Besides abstractapplic’s excellent answer, here are some recommendations:
A Brief History of Time and The Universe in a Nutshell by Stephen Hawking
Ender’s Game by Orson Scott Card
Foundation by Isaac Asimov
The Martian by Andy Weir
Paleontology: A Brief History of Life by Ian Tattersall
Richard Feynman’s books
If you value doing good, then your values will be better satisfied by living in a horrible world than in a utopia.
I worry about spoiling your story.
Don’t worry about spoiling the story. I write these stories with the comment section in mind. Because the comments here are so good, I can write harder puzzles than would otherwise be publishable. (Also, your comments are great, in general, and I want to encourage them.)
It’s been two years since I published this story. I feel that enough time has passed that I can answer some of your questions.
Spoilers below, I guess.
One tricky thing about writing for a public forum is that you have to satisfy multiple audiences at once. Some people do this by dumbing things down as far as possible. Others do it by tediously defining terms at the beginning, or by scaring away their non-target audience. I like to write stories that mean different things to different people. Sometimes it happens by accident. This time it was deliberate.
To put things simply, I wrote for two groups of people.
1. People who are confused about whether ethics is objective or subjective. I once earned the respect of a student by tripping him into contradicting himself on this subject. I got him to make the following three claims: (1) ethics must be objective or subjective, (2) ethics is not objective, and (3) ethics is not subjective. He realized he had contradicted himself, but couldn’t find the error. Then, instead of telling him where he had made a mistake, I just let him wrestle with the paradox. It was fun! In my model of the world, most people fall into this category, simply because they haven’t thought very hard about philosophy. People on this website are the exception. For the unreflective majority, my story is an exercise to help them learn how to think.
2. People who aren’t confused about whether ethics is objective or subjective. For them, this story isn’t a puzzle at all. It is a joke about D&D-style alignment systems.
As for honor systems, I can’t count how many times I’ve tried to explain them to modern-day leftists. It’s usually way too advanced for them. Instead, I start with simpler, concrete things, like how Native Americans fought wars, or how British impressment interacted with the American national identity in the Napoleonic Wars. I need to throw dirt into the memetic malware before I can explain alien ideas.
It made me think that maybe you’re better calibrated than I am about normal elites, and made it slightly plausible (given apparent base rates) that… maybe you agree with them?
You flatter me.
But maybe it is NOT a lack of understanding of honor or duty or deputation? Maybe the breakdown involves a lack of something even deeper?
It’s the legacy of postmodernism, and all its offspring, including Wokism.
But to answer your real question, what we call “ethics” is an imprecise word with several reasonable definitions. Much like the word “cat” can refer to a chibi drawing of a cat or the DNA of a cat, the word “ethics” fails to disambiguate between several reasonable definitions. Some of these reasonable definitions are objective. Others are subjective. If you’re using a word with reasonable-yet-mutually-exclusive definitions and the person you’re talking with believes such a thing is impossible (many people do), then you can play tricks on them.
I love your epistemic standard here. Childhood trauma is indeed blamed on many things which aren’t the result of childhood trauma. I believe this particular anecdote is an exception for various reasons (especially the use of LSD).
But the most interesting part of your comment is consideration of the counterfactual. Let’s assume that DID isn’t causing false reports of child trauma. (This is why the report of child abuse must be credible. If false reports of child abuse can be created, then this goes out the window.)
Now consider the priors and posteriors.
I’ve met (within an order of magnitude) 300 people in my life about whom I know this much. The prior probability that any particular one of them has the most severe childhood trauma is therefore 1/300 ≈ 0.3%. I’ve also met exactly one person who reports DID. If DID is uncorrelated with childhood trauma, then the prior probability that this one person is also the person with the most severe childhood trauma is low: only 0.3%.
If my prior probability estimate that extreme childhood trauma of this sort causes DID is a mere 10%, then my posterior probability that childhood trauma caused this instance of DID is 97%. In this way, I did consider the counterfactual.
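Spelled out, the update looks like this (a minimal sketch in Python; the likelihood of ~1 under the causal hypothesis is an assumption I’m making explicit, not something stated above):

```python
# Bayes' rule applied to the anecdote above.
# H: extreme childhood trauma of this sort causes DID.
# E: the one DID-reporter I know is also the most severely
#    traumatized of the ~300 people I know this well.

prior_h = 0.10             # P(H), the "mere 10%" prior
p_e_given_h = 1.0          # P(E | H): assumed ~1 if the causal link holds
p_e_given_not_h = 1 / 300  # P(E | not H): a coincidence among ~300 people

# P(E) by the law of total probability, then P(H | E) by Bayes' rule.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"P(H | E) = {posterior_h:.0%}")  # 97%
```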
Something useful in isolating the variables here is that DID isn’t going to cause this particular form of child abuse. However, mental illness can confound things by producing false reports of child abuse, a possibility I am ignoring in my calculation. I’m also ignoring common cause.
Of course, this is all from my perspective. From your perspective, my anecdote is contaminated by selection bias. Hearing a story of someone getting robbed is different from getting robbed yourself. Using this metaphor, I’ve been robbed, therefore I consider the crime rate to be high. You, however, have heard a nonrandom person tell a story of someone, somewhere being robbed, which you are right to ignore.
[Content warning: Child abuse.]
(3) Maybe childhood trauma directly causes BPD somehow;
I met one person who claimed to have BPD, and who attributed it to childhood trauma. He had the most acute symptoms of traumatic abuse I have ever observed. For that and other reasons, I consider his report credible.
In particular, he reported getting tortured as a kid while under LSD.
Given his history, I think it is perfectly reasonable to conclude that childhood experiences directly caused BPD.
I don’t know exactly when this was implemented, but I like how footnotes appear to the side of posts.
Thank you for the correction. I have changed “olivine rock” to “olivine vents”.
In terms of preserving a status quo in an adversarial conflict, I think a useful dimension to consider is First Strike vs. Second Strike. The basic idea is that technologies which incentivise a preemptive strike are offensive, whereas technologies which enable retaliation are defensive.
However, not all status-quo-preserving technologies are defensive. Consider disruptive[1] innovations, which flip the gameboard. Disruptive technologies are status-quo-destroying, but can advantage the incumbent or the underdog. They can make attacks more or less profitable. I think “disruptive vs sustaining” is a different dimension that should be considered orthogonal to “offensive vs defensive”.
But I haven’t seen as much literature around what substitutes would look like for cyberattacks, sanctions, landmines (e.g. ones that deactivate automatically after a period of time or biodegrade), missiles etc.
Here’s a video by Perun, a popular YouTuber who makes hour-long PowerPoint lectures about defense economics. In it, cyberattack itself is considered a substitute technology used to achieve political aims through an aggressive act less provocative than war.
They might help countries to organise more complex treaties more easily, thereby ensuring that countries got closer to their ideal arrangements between two parties…. It might be that there are situations in which two actors are in conflict, but the optimal arrangement between the two groups relies on coordination from a third or a fourth, or many more. The systems could organise these multilateral agreements more cost-effectively.
Smart treaties have existed for centuries, though they didn’t involve AI. Western powers used them to coordinate against Asian conquests. Of course, they didn’t find the optimal outcome for all parties. Instead, they enabled enemies to coordinate the exploitation of a mutual adversary.
I’m using the term “disruptive” the way Clayton Christensen defined it in his book The Innovator’s Dilemma, where “disruptive technologies” are juxtaposed against a “sustaining technology”.
Noted. The problem remains—it’s just less obvious. This phrasing still conflates “intelligent system” with “optimizer”, a mistake that goes all the way back to Eliezer Yudkowsky’s 2004 paper on Coherent Extrapolated Volition.
For example, consider a computer system that, given a number n, can (usually) produce the shortest computer program that will output n. Such a computer system is undeniably superintelligent, but it’s not a world optimizer at all.
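To make that concrete, here is a minimal sketch of the idea under toy assumptions (the three-symbol stack language is my invention for illustration, not anything from the comment above). It can brute-force the shortest program only because every program in the toy language halts; over a real programming language the same search is uncomputable in general, hence the “usually”:

```python
from itertools import product

# Toy stack language: '1' pushes 1, '+' adds the top two values,
# '*' multiplies them. A program is valid iff it ends with exactly
# one value on the stack.

def run(program: str):
    """Evaluate a toy program; return its output, or None if invalid."""
    stack = []
    for op in program:
        if op == "1":
            stack.append(1)
        else:  # '+' or '*'
            if len(stack) < 2:
                return None
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "+" else a * b)
    return stack[0] if len(stack) == 1 else None

def shortest_program(n: int, max_len: int = 13):
    """Search programs in order of length; the first hit is shortest."""
    for length in range(1, max_len + 1):
        for candidate in product("1+*", repeat=length):
            program = "".join(candidate)
            if run(program) == n:
                return program
    return None

print(shortest_program(6))  # a 9-symbol program, e.g. (1+1)*(1+1+1)
```

The searcher is "superintelligent" within its toy domain, yet it takes no actions and steers the world toward nothing: it just answers a question.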
“Far away, in the Levant, there are yogis who sit on lotus thrones. They do nothing, for which they are revered as gods,” said Socrates.
You’re right. I just like the phrase “postmodern warfare” because I think it’s funny.