The rabbit hole can go deep, and it probably isn’t worth getting too fancy for single-digit host counts. Fleets of thousands of spot instances benefit from the effort. Like everything, dev-time vs runtime-complexity vs cost-efficiency is a tough balance.
When I was doing this often, I had different modes: “dev mode”, which includes human-timeframe messing about, and “prod mode”, which was only for monitored workloads. In both cases, automating the “provision, spin up, and initial setup”, as well as the “auto-shutdown if not measurably used for N minutes” (60 was my default), with a one-command script made my life much easier.
I’ve seen scripts (though I don’t have links handy) that do this based on no active logins and no CPU load for X minutes as well. On the other tack, I’ve seen a lot of one-off processes that trigger a shutdown when they complete (and write their output/logs to S3 or somewhere durable). Often a Lambda is used for the control plane—it responds to signals and runs outside the actual host.
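A minimal sketch of that kind of idle-watchdog, in Python for illustration (the `who`/load-average checks and the thresholds are my assumptions, not any particular script):

```python
# Hypothetical idle-shutdown watchdog: halts the host after N minutes with
# no logged-in users and no CPU load. Thresholds are illustrative.
import os
import subprocess
import time

IDLE_MINUTES = 60          # N minutes of no measurable use before shutdown
CPU_IDLE_THRESHOLD = 0.05  # 1-minute load average below this counts as idle

def active_logins() -> bool:
    """True if `who` reports any logged-in sessions."""
    out = subprocess.run(["who"], capture_output=True, text=True).stdout
    return bool(out.strip())

def cpu_busy() -> bool:
    """True if the 1-minute load average exceeds the idle threshold."""
    return os.getloadavg()[0] > CPU_IDLE_THRESHOLD

idle_since = time.time()
while True:
    if active_logins() or cpu_busy():
        idle_since = time.time()  # any activity resets the idle clock
    elif time.time() - idle_since > IDLE_MINUTES * 60:
        subprocess.run(["sudo", "shutdown", "-h", "now"])  # stop the instance
    time.sleep(60)
```

For the one-off-process variant, the same idea collapses to “run job, upload logs, call shutdown”, with the Lambda (or whatever control plane) handling the cases where the host dies before it can report.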
There’s a big presumption there. If he was a p-zombie to start with, he still has non-experience after the training. We still have no experience-o-meter, or even a unit of measure that would apply.
For children without major brain abnormalities or injuries, who CAN talk about it, it’s a pretty good assumption that they have experiences. As you get more distant from your own structure, your assumptions about qualia should get more tentative.
Do you think that as each psychological continuation plays out, they’ll remain identical to one another?
They’ll differ from one another, and differ from their past singleton self. Much like future-you differs from present-you. Which one to privilege for what purposes, though, is completely arbitrary and not based on anything.
Which psychological stream one-at-the-moment-of-brain-scan ends up in is a matter of chance.
I think this is a crux. It’s not a matter of chance, it’s all of them. They all have qualia. They all have continuity back to the pre-upload self. They have different continuity, but all of them have equally valid continuity.
Think of it like this: if one had one continuation in which one lived a perfect life, one would be guaranteed to live that perfect life. But if one had 10 copies, only one of which lived a perfect life, one would not be guaranteed anything. It’s the average that matters.
Sure, just like if a parent has one child or 10 children, they have identical expectations.
I think we’re unlikely to converge here—our models seem too distant from each other to bridge. Thanks for the post, though!
Reminder to all: thought experiments are limited in what you can learn from them. Situations which are significantly out-of-domain for our evolved and trained experiences simply cannot be analyzed by our intuitions. You can sometimes test a model to see if it remains useful in novel/fictional situations, but you really can’t trust the results.
For real decisions and behaviors, details matter. And thought experiments CANNOT provide the details, or they’d just be situations, not hypotheticals.
Once we identify an optimal SOA
This is quite difficult, even without switching costs or fear of change. The definition of “optimal” is elusive, and most SOAs have so many measurable and unmeasurable, correlated and uncorrelated factors that direct comparison isn’t possible.
Add to this the common moral belief (incorrect IMO, but still very common) that “inaction is less blameworthy than wrong action, and only slightly blameworthy compared to correct action”, and there needs to be a pretty significant expected gain from switching in order to undertake it.
With that in mind, suppose you are asexual. Would you take a pill to make you not asexual?
I’m not asexual, but sex is less important to me than it is for most humans, as far as I can tell. I know of no pills to shift in either direction that are actually effective and side-effect-free, and it’s not meta-important enough to me to seek out change in either direction. This does NOT mean that I judge my current state optimal, just that I think the risk and cost of adjusting myself are higher than the value.
In fact, I suspect such pills would be very popular if they existed, and I would likely try them out if they were common, to find out whether it’s actually better in either direction.
You could make this argument about a LOT of things: for any trait or metric about yourself, why is this exact value the best one? Wouldn’t you like to raise or lower it? In fact, most people DO attempt to change things about themselves. It’s just not as easy as taking a pill, so the cost of actually working toward a change is nonzero and can’t be handwaved away.
Wow, a lot of assumptions without much justification
Let’s assume computationalism and the feasibility of brain scanning and mind upload. And let’s suppose one is a person with a large compute budget.
Already well into fiction.
But one is not both. This means that when one is creating a copy one can treat it as a gamble: there’s a 50% chance they find themselves in each of the continuations.
There’s a 100% chance that each of the continuations will find themselves to be … themselves. Do you have a mechanism to designate one as the “true” copy? I don’t.
What matters to one is then the average quality of one’s continuations.
Disagree, but I’m not sure that my preference (some aggregation function with declining marginal impact) is any more justifiable. It’s no less.
Before even a small fraction of one’s life has played out, one’s copy will bear no relation to oneself. To spend one’s compute on this person, effectively a stranger, is just altruism. One would be better off donating the compute to ASI.
Huh? This supposes that one of them “really” is you, rather than the actual truth that they are all equal continuations of you. Once they diverge, they’re more like twin siblings to each other, and there is no fact that would elevate one as primary.
This is a topic where macro and micro have a pretty big gap.
If you’re asking about measured large-group unemployment, you probably don’t get very good causality from any given change, and there’s no useful, simple model of the motivations and frictions of potential-employers and potential-employees. It’s a very complicated matching market.
If you’re asking about some specific reasons that an individual may be out of work or become out of work, you’ll get a lot better result and some concrete reasons. But everyone you talk to will say “that doesn’t scale!”.
At its most useless modeling level, unemployment happens when some people don’t want to (or aren’t allowed to) accept the wage that someone can and will offer.
I don’t understand the question. What intuition for not smoking are you talking about? CDT prefers smoking. Are you asking why EDT abstains from smoking? I’m not the best defender, as I don’t really think EDT is workable, but as I understand it, EDT updates its world-state based on actions, meaning that it prefers the world where you don’t have the lesion and don’t WANT to smoke.
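To illustrate with invented numbers (the probabilities and utilities below are my assumptions, not from the parent): EDT conditions on the action, so smoking is evidence of the lesion, while CDT holds the lesion’s probability fixed because smoking doesn’t cause it.

```python
# Toy smoking-lesion numbers; all values are illustrative assumptions.
P_LESION_PRIOR = 0.5
P_LESION_GIVEN_SMOKE = 0.9    # wanting to smoke is evidence of the lesion
P_LESION_GIVEN_ABSTAIN = 0.1
U_SMOKING = 10                # enjoyment of smoking
U_CANCER = -1000              # cancer is caused by the lesion, not by smoking

def edt_value(action_utility: float, p_lesion_given_action: float) -> float:
    """EDT conditions on the action: 'given that I smoke, which world am I in?'"""
    return action_utility + p_lesion_given_action * U_CANCER

def cdt_value(action_utility: float) -> float:
    """CDT keeps the lesion probability at its prior: smoking can't cause it."""
    return action_utility + P_LESION_PRIOR * U_CANCER

print(edt_value(U_SMOKING, P_LESION_GIVEN_SMOKE))  # -890.0
print(edt_value(0, P_LESION_GIVEN_ABSTAIN))        # -100.0 -> EDT abstains
print(cdt_value(U_SMOKING))                        # -490.0 -> CDT smokes
print(cdt_value(0))                                # -500.0
```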
The first one is only a metaphor—it’s not possible now, and we don’t know if it ever will be (because we don’t know how to scan a being in enough detail to recreate it well enough).
The second one is WAY TOO limited. If you put a radio anywhere near your head, or really any other-controlled media, you can be programmed. By trivial extension, you have been programmed. Get used to it.
Economists and other social theorists often take the concept of utility for granted.
Armchair economists and EAs even more so. Take for granted, and fail to document WHICH version of the utility concept they’re using.
For me, utility is a convenient placeholder for the underlying model behind our ordinal preferences as expressed through action (I did X, meaning I prefer the expected sum of value of the outcomes likely from X). Utility is the “value” that is preferred. Note that it’s kind of a circular definition: it’s the thing that drives decisions, proven by the fact that actions take place.
More expansive uses of the term come about by forgetting that this definition doesn’t carry much information about anything. It would be nice if we could find underlying consistent preferences, and this would be a good term for their unification. And if they’re long-term consistent preferences, maybe it should add up over time to explain time-preferences. And if everyone is equal, then clearly we can sum this thing up to get a group value.
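To put the “doesn’t carry much information” point in standard form (this is the textbook ordinal-representation result, not anything claimed in the parent post):

```latex
% If the revealed preference relation \succsim is complete and transitive
% (plus a continuity condition), there exists a function u with
X \succsim Y \iff u(X) \ge u(Y),
% and u is unique only up to strictly increasing transformations:
u' = f \circ u \quad \text{for any strictly increasing } f.
```

So this notion of utility pins down an ordering and nothing more; summing it over time or across people needs extra axioms (vNM cardinality, interpersonal comparability) that the bare definition doesn’t supply.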
I think it’s a different level of abstraction. Decision theory works just fine if you separate the action of predicting a future action from the action itself. Whether your prior-prediction influences your action when the time comes will vary by decision theory.
I think, for most problems we use to compare decision theories, it doesn’t matter much whether considering, planning, preparing, replanning, and acting are correlated time-separated decisions, or whether it all collapses into a sum of “how to act at point-in-time”. I haven’t seen much detailed exploration of decision theory crossed with embedded agents, or with capacity/memory-limited ongoing decisions, but it would be interesting and important, I think.
Decision theory is fine, as long as we don’t think it applies to most things we colloquially call “decisions”. In terms of instantaneous discrete choose-an-action-and-complete-it-before-the-next-processing-cycle, it’s quite a reasonable topic of study.
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn’t seem to be an actual decision, but rather just a belief about a future decision—about which action you will pick in the future
Correct. There are different levels of abstraction of predictions and intent, and of observation/memory of past actions, which all get labeled “decision”. I decide to attend a play in London next month. This is an intent and a belief. It’s not guaranteed. I buy tickets for the train and for the show. The sub-decisions to click “buy” on the websites are in the past, and therefore committed. The overall decision has more evidence and gets more confident. The cancellation window passes. Again, a bit more evidence. I board the train; that sub-decision is in the past, so is committed, but there’s STILL some chance I won’t see the play.
Anything you call a “decision” that hasn’t actually already happened is really a prediction or an intent. Even DURING an action, you only have intent and prediction. While the impulse is traveling down my arm to click the mouse, the power could still go out and I don’t buy the ticket. There is past, which is pretty immutable, and future, which cannot be known precisely.
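A toy version of the play example, with invented step probabilities: each sub-decision that slips into the past goes to certainty, and the overall “decision” is just the product of what remains.

```python
# Invented probabilities for the London-play example; purely illustrative.
steps = {
    "buy tickets": 0.98,
    "cancellation window passes": 0.99,
    "board the train": 0.97,
    "actually watch the play": 0.95,
}

def p_see_play(completed: set) -> float:
    """Sub-decisions in the past are certain; the rest stay probabilistic."""
    p = 1.0
    for name, prob in steps.items():
        p *= 1.0 if name in completed else prob
    return p

print(f"{p_see_play(set()):.3f}")            # ~0.894 at 'decision' time
print(f"{p_see_play({'buy tickets'}):.3f}")  # ~0.912 once tickets are bought
print(f"{p_see_play(set(steps) - {'actually watch the play'}):.3f}")  # 0.950 on the train
```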
I think this is compatible with Spohn’s example (at least the part you pasted), and contradicts OP’s claim that “you did not make a decision” for all the cases where the future is uncertain. ALL decisions are actually predictions, until they are in the past tense. One can argue whether that’s a p(1) prediction or a different thing entirely, but that doesn’t matter to this point.
“If, on making a decision, your next thought is ‘Was that the right decision?’ then you did not make a decision” is actually good directional advice in many cases, but it’s factually simply incorrect.
When the decision is made, consideration ends. The action must be wholehearted in spite of uncertainty.
This seems like hyperbolic exhortation rather than simple description. This is not how many decisions feel to me: many decisions are exactly a belief (complete with Bayesian uncertainty). A belief in future action, to be sure, but distinct in time from the action itself.
I do agree with this as advice, in fact: many decisions one faces should be treated as a commitment rather than an ongoing reconsideration. It’s just not actually true in most cases, and the ability to change one’s plan when circumstances or knowledge change is sometimes quite valuable. Knowing when to commit and when to be flexible is left as an exercise...
I only see one downvoted post, and a bunch of comments and a few posts with very low voting at all. That seems pretty normal to me, and the advice of “lurk for quite a bit, and comment occasionally” is usually good for any new users on any site.
A lot depends on what you mean by “required”, and what specific classes or functions you’re talking about. The core skill of committing a position to writing and supporting it with logic is never going away. It will shift from “do this with minimal spelling and grammar assistance” to “ensure that the prompt-review-revise loop generates output you can stand behind”.
This is already happening in many businesses and in the practical (grant-writing) aspects of academia. It’ll take a while for undergrad and MS programs to admit that their academic theories of what they’re teaching need revision.
This seems generally applicable. Any significant money transaction includes expectations, both legible and il-, which some participants will classify as bullshit. Those holding the expectations may believe it to be legitimately useful, or semi-legitimately necessary due to lack of perfect alignment.
If you want to specify a bit, we can probably guess at why it’s being required.
[Note: I apologize for being somewhat combative—I tend to focus on the interesting parts, which is those parts which don’t add up in my mind. I thank you for exploring interesting ideas, and I have enjoyed the discussion! ]
I was only saying that I don’t see anything proving it won’t work
Sure, proving a negative is always difficult.
I agree that this missile problem shouldn’t happen in the first place. But it did happen in the past
Can you provide details on which incident you’re talking about, and why the money-bond is the problem that caused it, rather than simply not having any communications loop to the controllers on the ground or decent identification systems in the missile?
Interesting, but I worry that the word “Karma” as a label for a legibly-usable resource token makes it VERY different from common karma systems on social websites, and that the bid/distribute system is even further from common usage.
For the system described, “karma” is a very misleading label. Why not just use “dollars” or “resource tokens”?