I mean, if there were ANY widely-available, repeatable, +EV bets, they’d pretty quickly dry up thanks to just a few players who can spell “Kelly”.
There are LOTS of negative-EV bets available, if you think a 0.005% chance of $1,000,000 and a 99.995% chance of $0 is a better distribution than a straight $1000.
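To spell out the arithmetic on that example (treating the straight $1000 as the alternative being given up):

```python
# Expected value of the lottery-style gamble vs. the sure $1000
p_win, prize = 0.00005, 1_000_000    # 0.005% chance at $1,000,000
ev_gamble = p_win * prize            # = $50
ev_sure = 1000                       # guaranteed
print(ev_gamble, ev_sure)            # 50.0 vs 1000 -- the gamble is deeply -EV
```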
Also, you don’t necessarily need to think about investment strategy or influencing corporate decisions in a coop, since you can grant someone a proxy.
You definitely need to think about these things to value working in a coop (or a corporation in which part of your compensation is voting stock) vs “just a job”. If you are going to just grant a proxy, you’d prefer to be paid more in money and less in control.
Also also, why are socialist-vibe blogposts so often relegated to “personal blogpost” while capitalist-vibe blogposts aren’t? I mean, I get the automatic barrage of downvotes, but you’d think the mods would at least try to appear impartial.
I upvoted, but I don’t expect it to be particularly popular or front-page-worthy. It may be partly about the vibe, but I suspect it’s mostly about the content—it’s a little less rigorous in causality of impact than the more common front-page topics, and it comes across as an attempt to influence rather than to explore or analyze from a rational(ist) standpoint.
There’s an important question of scale here. One size almost certainly does not fit all, and the best governance for a multinational many-billion-dollar enterprise is different from that of a local consumer-service organization.
Also, there’s a large group of people who seem to prefer to have a “pure employment” model, without having to think about investment strategy or influencing corporate decisions.
If you spend 8000 times less on AI alignment (compared to the military),
You must also believe that AI risk is 8000 times less (than military risk).[1]
No. You must believe that spending on the military is 8000 times more helpful to your goals. And really, in a democracy or other multilateral decision framework, nobody actually has to believe this; it just has to be 8000 times easier to agree to spend a marginal amount, which is quite path-dependent.
Even if you DO believe the median estimates as given, you have to weight them by the marginal change that spending makes. Military spending keeps the status quo, rewards your constituents, makes you look good, etc. AI spending is … really confusing and doesn’t really help any political goals. It’s absolutely not clear that spending more can increase safety—the obvious thing that happens when you spend is acceleration, not slowdown.
Ah, yes—bargaining solutions that ignore or hide a significant underlying power disparity are rampant in wishful-thinking academic circles, and irrelevant in real life. That’s the context I was missing; my confusion is resolved. Thanks!
I’m missing some context here. Is this not obvious, and well-supported by the vast majority of “treaties” between Europeans and natives in the 16th through 19th centuries? For legal settlements, the outcome is generally between the extremes that each party would prefer, but that range sometimes includes “quite bad”, even if not completely arbitrary.
“We’ll kill you quickly and painlessly” isn’t actually arbitrarily bad, it’s only quite bad. There are possibly worse outcomes available if no agreement is reached.
Overall this feels comfortable and reasonable to me in some situations, but until I’d read the whole thing I was applying it to other situations, and I had a very strong negative reaction to the opening.
one is constantly called to account for one’s behavior. At any moment, one may be asked “what are you doing?” or “why did you do that?” And one is expected to provide a reasonable answer.
This sounds like a nightmare. But that depends a whole lot on the frequency and intensity of such questions and discussion. “Constantly called to account” just isn’t going to work for me. “Able to discuss goals and behaviors when useful and appropriate” is mandatory for happy coexistence (for me). And those are arguably the same thing, just slight variants.
I think the key underlying context to call out is “presumption of alignment”. Among people who overall share a philosophy and at least some goals, this all just works. Among less-trusted acquaintances, it does not.
The ecosystem (econo-system?) of drug regulation and approval is the primary cost/required investment for much of this. The tension between protecting profits (and making sure all agencies and participants get their cut) and selling the system as protecting the public is really hard to break.
One of the biggest online threats to rational discourse,
If this is true, I’m relieved, because it means there are no serious threats. But I doubt it. I hadn’t heard the name for a number of years, and even in its heyday it only mattered in a tiny part of a tiny sub-community.
You can put those options into .ssh/config, which makes them work for things that use SSH directly (scp, git, other tools) when they don’t know to go through your script.
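A hypothetical entry as an example (the host alias, address, user, and key path are all placeholders; the directives you’d actually add are whatever your wrapper script passes on the command line):

```
# ~/.ssh/config
Host myserver
    HostName 203.0.113.10
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
    Port 22
```

After that, `ssh myserver`, `scp file myserver:/tmp/`, and git remotes over SSH all pick up the same settings without needing the script.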
Thanks for writing this, but my personal experience of valuing things directly contradicts it. Almost all valuations have some kind of non-linear aggregation. “Declining marginal utility” is observationally and reflectively true for me, at least, and there are many cases outside myself that are more consistent with nonlinear aggregation than linear.
In a lot of cases, the margin is tiny, so it’s hard to notice and not very important. Going from 9 billion to 9.01 or 9.5 billion is close to linear. Going from 0 to 1 or 1 to 2 or 9 to 10 is often VERY different in utility-change.
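As a minimal sketch of what I mean, using log(1+x) purely as a stand-in for some declining-marginal-utility curve (the specific function is an illustrative assumption, not a claim about anyone’s actual preferences):

```python
import math

def u(x):
    # toy concave utility: log(1 + x), chosen only for illustration
    return math.log1p(x)

# per-unit change in utility over each interval
for a, b in [(0, 1), (1, 2), (9, 10), (9e9, 9.01e9), (9e9, 9.5e9)]:
    print(f"{a:>14,.0f} -> {b:>14,.0f}: {(u(b) - u(a)) / (b - a):.3g} per unit")
```

The first few units differ wildly per unit, while the two jumps near 9 billion come out nearly identical per unit, i.e. close to linear over that range.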
Interesting, but I worry that the word “Karma” as a label for a legibly-usable resource token makes it VERY different from common karma systems on social websites, and that the bid/distribute system is even further from common usage.
For the system described, “karma” is a very misleading label. Why not just use “dollars” or “resource tokens”?
The rabbit hole can go deep, and probably isn’t worth getting too fancy for single-digit hosts. Fleets of thousands of spot instances benefit from the effort. Like everything, dev-time vs runtime-complexity vs cost-efficiency is a tough balance.
When I was doing this often, I had different modes for “dev mode, which includes human-timeframe messing about” and “prod mode”, which was only for monitored workloads. In both cases, automating the “provision, spin up, and initial setup”, as well as the “auto-shutdown if not measurably used for N minutes (60 was my default)” with a one-command script made my life much easier.
I’ve seen scripts (though I don’t have links handy) that do this based on no active logins and no CPU load for X minutes as well. On the other tack, I’ve seen a lot of one-off processes that trigger a shutdown when they complete (and write their output/logs to S3 or somewhere durable). Often a Lambda is used for the control plane—it responds to signals and runs outside the actual host.
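A minimal sketch of the “no logins and no load for N minutes” version (the thresholds, the `who`/load-average checks, and wiring it up under cron or systemd are all assumptions to adapt, not a specific script I’m pointing to):

```python
#!/usr/bin/env python3
"""Idle-shutdown sketch: power off after IDLE_MINUTES with no logins and low load."""
import os
import subprocess
import time

IDLE_MINUTES = 60        # shut down after this long with no activity
CHECK_INTERVAL = 60      # seconds between checks
LOAD_THRESHOLD = 0.05    # 1-minute load average considered "idle"

def is_idle() -> bool:
    logins = subprocess.run(["who"], capture_output=True, text=True).stdout.strip()
    load1, _, _ = os.getloadavg()
    return not logins and load1 < LOAD_THRESHOLD

idle_seconds = 0
while True:
    idle_seconds = idle_seconds + CHECK_INTERVAL if is_idle() else 0
    if idle_seconds >= IDLE_MINUTES * 60:
        subprocess.run(["sudo", "shutdown", "-h", "now"])
        break
    time.sleep(CHECK_INTERVAL)
```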
There’s a big presumption there. If he was a p-zombie to start with, he still has non-experience after the training. We still have no experience-o-meter, or even a unit of measure that would apply.
For children without major brain abnormalities or injuries, who CAN talk about it, it’s a pretty good assumption that they have experiences. As you get more distant from your own structure, your assumptions about qualia should get more tentative.
Do you think that as each psychological continuation plays out, they’ll remain identical to one another?
They’ll differ from one another, and differ from their past singleton self. Much like future-you differs from present-you. Which one to privilege for what purposes, though, is completely arbitrary and not based on anything.
Which psychological stream one-at-the-moment-of-brain-scan ends up in is a matter of chance.
I think this is a crux. It’s not a matter of chance, it’s all of them. They all have qualia. They all have continuity back to the pre-upload self. They have different continuity, but all of them have equally valid continuity.
Think of it like this: if one had one continuation in which one lived a perfect life, one would be guaranteed to live that perfect life. But if one had 10 copies in which one lived a perfect life, one does not benefit at all. It’s the average that matters.
Sure, just like if a parent has one child or 10 children, they have identical expectations.
I think we’re unlikely to converge here—our models seem too distant from each other to bridge. Thanks for the post, though!
Reminder to all: thought experiments are limited in what you can learn. Situations which are significantly out-of-domain for our evolved and trained experiences simply cannot be analyzed by our intuitions. You can sometimes test a model to see if it remains useful in novel/fictional situations, but you really can’t trust the results.
For real decisions and behaviors, details matter. And thought experiments CANNOT provide the details, or they’d be just situations, not hypotheticals.
Once we identify an optimal SOA
This is quite difficult, even without switching costs or fear of change. The definition of “optimal” is elusive, and most SOAs have so many measurable and unmeasurable, correlated and uncorrelated factors that direct comparison isn’t possible.
Add to this the common moral beliefs (incorrect IMO, but still very common) of “inaction is less blameworthy than wrong action, and only slightly blameworthy compared to correct action”, and there needs to be a pretty significant expected gain from switching in order to undertake it.

With that in mind, suppose you are asexual. Would you take a pill to make you not asexual?
I’m not asexual, but sex is less important to me than it is for most humans, as far as I can tell. I know of no pills to shift in either direction that are actually effective and side-effect-free, and it’s not meta-important enough to me to seek out change in either direction. This does NOT mean that I judge it optimal, just that I think the risk and cost of adjusting myself are higher than the value.
In fact, I suspect such pills would be very popular if they existed, and I would likely try them out if they were common, to find out whether it’s actually better in either direction.
You could make this argument about a LOT of things—for any trait or metric about yourself, why is this exact value the best one? Wouldn’t you like to raise or lower it? In fact, most people DO attempt to change things about themselves. It’s just not actually as easy as taking a pill, so the cost of actually working toward a change is nonzero, and can’t be handwaved away.
Wow, a lot of assumptions without much justification
Let’s assume computationalism and the feasibility of brain scanning and mind upload. And let’s suppose one is a person with a large compute budget.
Already well into fiction.
But one is not both. This means that when one is creating a copy one can treat it as a gamble: there’s a 50% chance they find themselves in each of the continuations.
There’s a 100% chance that each of the continuations will find themselves to be … themselves. Do you have a mechanism to designate one as the “true” copy? I don’t.
What matters to one is then the average quality of one’s continuations.
Disagree, but I’m not sure that my preference (some aggregation function with declining marginal impact) is any more justifiable. It’s no less.
Before even a small fraction of one’s life has played out, one’s copy will bear no relation to oneself. To spend one’s compute on this person, effectively a stranger, is just altruism. One would be better off donating the compute to ASI.
Huh? This supposes that one of them “really” is you, rather than the actual truth that they are all equal continuations of you. Once they diverge, they’re more like twin siblings of each other, and there is no fact that would elevate one as primary.
If you go the casino route, craps is slightly better. Don’t pass is 1.40% house edge, and they let you take (or lay, for don’t pass) “free odds” on a point, which is 0% (pays true odds of winning), getting it down below a percent. Taxes may not matter if you can deduct the full charitable contribution.
Note that if you had a perfect 50% even-money bet, you’d have to win 10 times in a row to turn your $1000 into $1.024M. 0.5 ^ 10 = 0.000977, so you’ve got almost a tenth of a percent chance of winning.
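Written out (assuming a fair 50/50 double-or-nothing each round and ignoring the actual house edge):

```python
# Parlay $1000 through ten consecutive even-money doubles
final = 1000 * 2**10    # $1,024,000
p_all = 0.5**10         # 0.0009765625 -- just under 0.1%
print(final, p_all)
```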