I always found that aspect weak. It is clearly and sadly evident that utility pessimization (I assume roughly synonymous with coercion?) is effective and stable, both on Golarion and Earth. Yet half the book seems to be gesturing at what a suboptimal strategy it is without actually spelling out how you can defeat an agent who pursues such a strategy (without having magic and some sort of mysterious meta-gods on your side).
Update:
I went and read the background material on acausal trade and narrowed down even further where I'm confused. It's this paragraph:
> Another objection: Can an agent care about (have a utility function that takes into account) entities with which it can never interact, and about whose existence it is not certain? However, this is quite common even for humans today. We care about the suffering of other people in faraway lands about whom we know next to nothing. We are even disturbed by the suffering of long-dead historical people, and wish that, counterfactually, the suffering had not happened. We even care about entities that we are not sure exist. For example: We might be concerned by a news report that a valuable archaeological artifact was destroyed in a distant country, yet at the same time read other news reports stating that the entire story is a fabrication and the artifact never existed. People even get emotionally attached to the fate of a fictional character.
My problem is the lack of evidence that genuine caring about entities with which one can never interact really is "quite common even for humans today", after factoring out indirect benefits/costs and social signalling.
How common, sincerely felt, and motivating should caring about such entities be for acausal trade to work?
Can you still use acausal trade to resolve various game-theory scenarios with agents whom you might later contact while putting zero priority on agents that are completely causally disconnected from you? If so, then why so much emphasis on permanently un-contactable agents? What does it add?
> Acausally separate civilizations should obtain our consent in some fashion before invading our local causal environment with copies of themselves or other memes or artifacts.
Aha! Finally, there it is, a statement that exemplifies much of what I find confusing about acausal decision theory.
1. What are acausally separate civilizations? Are these civilizations we cannot directly talk to and so we model their utility functions and their modelling of our utility functions etc. and treat that as a proxy for interviewing them?
2. Are these civilizations we haven’t met yet but might someday, or are these ones that are impossible for us to meet even in theory (parallel universes, far future, far past, outside our Hubble volume, etc.)? Because other acausal stuff I’ve read seems to imply the latter in which case...
2a. If I don’t care what civilizations do (to include “simulating” me) unless it’s possible for me or people I care about to someday meet them, do I have any reason to care about acausal trade?
3. Can you give any specific examples of what it would be like for an acausally separate civilization to invade our local causal environment which do NOT depend in any way on simulations?
4. I heard that acausal decision theory has practical applications in geopolitics, though unfortunately no real-world examples were given. Do you know of any concrete examples of using acausal trade or acausal norms to improve outcomes when dealing with ordinary physical people with whom you cannot directly communicate?
I realize you probably have better things to do than educating an individual noob about something that seems to be common knowledge on LW. For what it’s worth, I might be representative of a larger group of people who are open to the idea of acausal decision theory but who cannot understand existing explanations. You seem like an especially down-to-earth and accessible proponent of acausal decision theory, and you seem to care about it enough to have written extensively about it. So if you can help me bridge the gap to fully getting what it’s about, it may help both of us become better at explaining it to a wider audience.
What is meant by ‘reflecting’?
> reflecting on {reflecting on whether to obey norm x, and if that checks out, obeying norm x} and if that checks out, obeying norm x
Is this the same thing as saying “Before I think about whether to obey norm x, I will think about whether it’s worth thinking about it and if both are true, I will obey norm x”?
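In case it helps pin down my question, here is the naive way I'm currently parsing that nested phrase, as a toy Python sketch; the predicate names are hypothetical stand-ins, not anything from your post:

```python
# A naive sketch of how I'm parsing the nested phrase, just to check my own
# reading. Both predicates are hypothetical stand-ins for the two levels of
# "reflecting"; I'm not claiming this is what you mean.

def norm_checks_out(norm):
    """Inner reflection: 'reflecting on whether to obey norm x'."""
    return True  # stub

def worth_reflecting_on(procedure, norm):
    """Outer reflection: reflecting on the whole inner procedure."""
    return True  # stub

def obey(norm):
    print(f"obeying {norm}")

def inner(norm):
    # "reflecting on whether to obey norm x, and if that checks out, obeying norm x"
    if norm_checks_out(norm):
        obey(norm)

def outer(norm):
    # "reflecting on {inner}, and if that checks out, obeying norm x"
    # Note: inner is only reflected upon here, not called.
    if worth_reflecting_on(inner, norm):
        obey(norm)

outer("norm x")
```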
I’ve been struggling to understand acausal trade and related concepts for a long time. Thank you for a concise and simple explanation that almost gets me there, I think...
Am I roughly correct in the following interpretation of what I think you are saying?
Acausal norms amount to extrapolating the norms of people/aliens/AIs/whatever whom we haven’t met yet and know nothing about other than what can be inferred from us someday meeting them. If we can identify norms that are likely to generalize to any intelligent being capable of contact and negotiation and not contingent on any specific culture/biology/happenstance, then we can pre-emptively obey those norms to maximize the probability of a good outcome when we do meet these people/aliens/AIs/whatever?
Would you mind sharing the ratio in which you allocated these positions?
Maybe the key is not to assume the entire economy will win, but make some attempt to distinguish winners from losers and then find ETFs and other instruments that approximate these sectors.
So, some wild guesses...
AI labs and their big-tech partners: winners
Cloud hosting: winners
Commercial real estate specializing in server farms: winners
Whoever comes up with tractable ways to power all these server farms: winners
AI-enabling hardware companies: winners until the Chinese blockade Taiwan and impose an embargo on raw materials… after that… maybe losers except the ones that have already started diversifying their supply-chains?
Companies which inherently depend on aggregating and reselling labor: tricky, because if they do nothing, they’re toast, but some of them can turn themselves into resellers of AI… e.g. a temp agency rolling out AI services as a cheaper product line
Professional services: same as above but less exposed
Businesses that are needed only in proportion to other businesses having human employees: travel, office real estate, office furniture and supplies: losers
As the effects ripple out and more and more workers are displaced...
Low to mid-end luxury goods and eventually anything that depends on mass discretionary spending: losers
Though what I really would like to do is create some sort of rough model of an individual non-AI company with the following parameters:
Recurring costs attributable to employees
Other recurring costs
Revenue
Fraction of employees whose jobs can be automated at the current state of the art
Variables representing how far along this company is in planning or implementing AI-driven consolidation and how quickly it is capable of cutting over to AI
Fixed costs of cut-over to AI
Variable costs of cut-over to AI (depending on aggregate workload being automated)
Whatever other variables people who, unlike me, actually know something about fundamental analysis would put in such a model.
...and then be able to make a principled guess about where on the AI-winners vs AI-losers spectrum a given company is. I even started sketching out a model like this until I realized that someone with relevant expertise must have already written a general-purpose model of this sort and I should find it and adapt it to the AI-automation scenario instead of making up my own.
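For concreteness, this is roughly the kind of toy model I had started sketching; all the parameter names and example numbers are made up for illustration, not taken from any real analysis:

```python
# Toy back-of-envelope model of how much a non-AI company could gain from
# automating part of its workforce. All names and example numbers are made
# up for illustration; a real model would need actual fundamentals.

from dataclasses import dataclass

@dataclass
class Company:
    revenue: float               # annual revenue
    employee_costs: float        # recurring costs attributable to employees
    other_costs: float           # other recurring costs
    automatable_fraction: float  # fraction of employee costs current AI could replace
    adoption_progress: float     # 0 = no plan, 1 = cut-over complete
    cutover_fixed: float         # one-time fixed cost of switching to AI
    cutover_variable_rate: float # variable cut-over cost per $ of workload automated

    def baseline_profit(self) -> float:
        return self.revenue - self.employee_costs - self.other_costs

    def automated_profit(self, amortization_years: float = 5.0) -> float:
        automated = self.employee_costs * self.automatable_fraction * self.adoption_progress
        cutover = self.cutover_fixed / amortization_years + self.cutover_variable_rate * automated
        return self.baseline_profit() + automated - cutover

    def ai_winner_score(self) -> float:
        """Relative profit uplift from automation; crude winner/loser proxy."""
        base = self.baseline_profit()
        return (self.automated_profit() - base) / abs(base) if base else float("inf")

# Example with made-up numbers (in $M):
acme = Company(revenue=1000, employee_costs=400, other_costs=450,
               automatable_fraction=0.3, adoption_progress=0.5,
               cutover_fixed=20, cutover_variable_rate=0.1)
print(f"baseline profit: {acme.baseline_profit():.1f}, "
      f"with automation: {acme.automated_profit():.1f}, "
      f"winner score: {acme.ai_winner_score():.2%}")
```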
I’m trying out this strategy on Investopedia’s simulator (https://www.investopedia.com/simulator/trade/options)
The January 15 2027 call options on QQQ look like this as of posting (current price 481.48):
| Strike | Black-Scholes | Ask   |
|--------|---------------|-------|
| 485    | 64.244        | 77.4  |
| 500    | 57.796        | 69.83 |
| …      | …             | …     |
| 675    | 14.308        | 14    |
| 680    | 13.693        | 13.5  |
| 685    | 13.077        | 12.49 |
| …      | …             | …     |
| 700    | 11.446        | 10.5  |
| …      | …             | …     |
| 720    | 9.702         | 8.5   |

So, if you were following this strategy and buying today, would you buy 485 because it has the lowest out-of-the-money strike price? Would you buy 675 because it's the lowest strike price where the ask is lower than the theoretical Black-Scholes fair price? Would you go for 720 because it's the cheapest available? Would you look for the out-of-the-money option with the largest difference between Black-Scholes and the ask?
What would be your thought process? I’m definitely hoping to hear from @lc but am interested in hearing from anybody who found this line of reasoning worth investigating and has opinions about it.
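For reference, the calculation I assume is behind the simulator's Black-Scholes column looks roughly like this; the volatility and risk-free rate below are placeholder guesses, since I don't know what inputs Investopedia actually uses:

```python
# Black-Scholes price of a European call, using only the standard library.
# The volatility and risk-free rate are placeholder guesses for illustration;
# I don't know what inputs the simulator actually uses.

from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot: float, strike: float, t_years: float,
            vol: float, rate: float) -> float:
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)

spot = 481.48          # QQQ price from the table above
t = 2.0                # roughly two years to the Jan 2027 expiry (guess)
vol, rate = 0.20, 0.04 # assumed implied volatility and risk-free rate

for strike in (485, 675, 720):
    print(f"strike {strike}: model price {bs_call(spot, strike, t, vol, rate):.2f}")
```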
So, how can we improve this further?
Some things I'm going to look into (please tell me if any of these are a waste of time):
Seeing if there are any REITs that specialize in server farms or chip fabs and have long-term options
Apparently McKinsey has a report about what white-collar jobs are most amenable to automation. Tracking down this report (they have lots), if it's not paywalled, or at least learning enough about it to get the gist of which (non-AI) companies would save the most money by "intelligent automation".
From first principles I'd expect companies/industries which have a large proportion of their operating expenses going to salaries and benefits to be the first in line to automate (rough sketch of such a screen below).
Industries that are essentially aggregators and resellers of labor would have to do this to survive at all
...and the ones among them that lag in AI adoption would be candidates for short positions
A risk I see is China blockading Taiwan and/or limiting trade with the US and thus slowing AI development until a new equilibrium is reached through onshoring (and maybe recycling or novel sources of materials or something?)
On the other hand maybe even the current LLMs already have the potential to eliminate millions of jobs and it's just going to take companies a while to do the planning and integration work necessary to actually do it.
So one question is, will the resulting increase in revenue offset the revenue losses from a proxy war with China?
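Here's the rough sketch of that labor-cost screen I mentioned above; the tickers and all the figures are invented placeholders, not real financials:

```python
# Toy screen: rank companies by the share of operating expenses going to
# salaries/benefits, and flag laggards in AI adoption as short candidates.
# Tickers and figures are invented placeholders, not real data.

companies = [
    # (ticker, salary_and_benefits, total_operating_expenses, ai_adoption_score 0-1)
    ("LABR", 70.0, 90.0, 0.1),
    ("SVCS", 55.0, 100.0, 0.6),
    ("MANU", 30.0, 120.0, 0.3),
]

def labor_share(salaries: float, opex: float) -> float:
    return salaries / opex

ranked = sorted(companies, key=lambda c: labor_share(c[1], c[2]), reverse=True)
for ticker, sal, opex, adoption in ranked:
    share = labor_share(sal, opex)
    flag = "short candidate?" if share > 0.5 and adoption < 0.3 else ""
    print(f"{ticker}: labor share {share:.0%}, AI adoption {adoption:.1f} {flag}")
```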
I guess scenarios where humans occupy a niche analogous to animals that we don’t value but either cannot exterminate or choose not to.
Parfit's Hitchhiker and transparent Newcomb: So is the interest in UDT motivated by the desire for a rigorous theory that explains human moral intuitions? Like, it's not enough that feelings of reciprocity must have conveyed a selective advantage at the population level; we need to know whether/how they are also net beneficial to the individuals involved?
What should one do if in a Newcomb’s paradox situation but Omega is just a regular dude who thinks they can predict what you will choose, by analysing data from thousands of experiments on e.g. Mechanical Turk?
Do UDT and CDT differ in this case? If they differ then does it depend on how inaccurate Omega’s predictions are and in what direction they are biased?
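For what it's worth, here is the naive expected-value arithmetic I'd do with the standard $1,000,000 / $1,000 payoffs, treating the predictor's accuracy p as a free parameter; this is just the straightforward calculation, not a claim about what UDT or CDT actually prescribes:

```python
# Naive expected-value comparison for Newcomb with an imperfect predictor.
# Standard payoffs assumed: $1,000,000 in the opaque box if one-boxing is
# predicted, $1,000 in the transparent box. p = predictor accuracy.

def ev_one_box(p: float) -> float:
    # Opaque box is full only if the predictor correctly foresaw one-boxing.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # You always get the $1,000; the opaque box is full only if the
    # predictor wrongly foresaw one-boxing.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.5005, 0.55, 0.7, 0.9):
    print(f"p={p}: one-box {ev_one_box(p):,.0f}  two-box {ev_two_box(p):,.0f}")

# On this naive calculation the two choices break even at p = 0.5005;
# a CDT agent would two-box regardless, since its choice can't change
# what's already in the boxes.
```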
Thank you for answering.
I’m excluding simulations by construction.
Amnesia: So does UDT, roughly speaking, direct you to weigh your decisions based on your guesstimate of what decision-relevant facts apply in that scenario? And then choose among available options randomly but weighted by how likely each option is to be optimal in whatever scenario you have actually found yourself in?
Identical copies (or non-identical but very similar players?), players with aligned interests: I guess this is a special case of dealing with a predictor agent, where our predictions of each other's decisions are likely enough to be accurate that they should be taken into account? So UDT might direct you to disregard causality because you're confident that the other party will do the same on their own initiative?
But I don’t understand what this has in common with amnesia scenarios. Is it about disregarding causality?
Non-perfect predictors: Most predictors of anything as complicated as behaviour are VERY imperfect both at the model level and the data-collection level. So wouldn't the optimal thing be to down-weight your confidence in your prediction of what the other player will do when deciding your own course of action? Unless you have information about how they model you, in which case you could try to predict your own behaviour from their perspective?
Are there any practical applications of UDT that don’t depend on uncertainty as to whether or not I am a simulation, nor on stipulating that one of the participants in a scenario is capable of predicting my decisions with perfect accuracy?
I appreciate your feedback and take it in the spirit it is intended. You are in no danger of shitting on my idea because it’s not my idea. It’s happening with or without me.
My idea is to cast a broad net looking for strategies for harm reduction and risk mitigation within these constraints.
I'm with you that machines practising medicine autonomously is a bad idea, and so do doctors. Because, idealistically, they got into this work in order to help people, and, cynically, they don't want to be rendered redundant.
The primary focus looks like workflow management, not diagnoses. E.g. how to reduce the amount of time various requests sit in a queue by figuring out which humans are most likely the ones who should be reading them.
Also, predictive modelling, e.g. which patients are at elevated risk for bad outcomes. Or how many nurses to schedule for a particular shift. Though these don’t necessarily need AI/ML and long predate AI/ML.
Then there are auto-suggestor/auto-reminder use-cases: “You coded this patient as having diabetes without complications, but the text notes suggest diabetes with nephropathy, are you sure you didn’t mean to use that more specific code?”
So, at least in the short term, AI apps will not have the opportunity to screw up in the immediately obvious ways like incorrect diagnoses or incorrect orders. It’s the more subtle screw-ups that I’m worried about at the moment.
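For concreteness, the auto-reminder use-case above would look something like the toy rule below; the codes, keywords, and message are illustrative only, not taken from any real product:

```python
# Toy sketch of a rule-based coding reminder: if the assigned diagnosis code
# is the unspecific one but the note text mentions a complication, suggest
# the more specific code. Codes and keywords are illustrative only.

SUGGESTIONS = {
    # (assigned_code, keyword in note) -> suggested more-specific code
    ("E11.9", "nephropathy"): "E11.21",
}

def suggest_code(assigned_code: str, note_text: str):
    note = note_text.lower()
    for (code, keyword), specific in SUGGESTIONS.items():
        if assigned_code == code and keyword in note:
            return (f"You coded {code} (diabetes without complications), but the "
                    f"notes mention '{keyword}'. Did you mean {specific}?")
    return None

print(suggest_code("E11.9", "Longstanding T2DM with early diabetic nephropathy."))
```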
Definition please.
VNM
The first step is to see a psychiatrist and take the medication they recommend. For me it was an immediate night-and-day difference. I don’t know why the hell I wasted so much of my life before I finally went and got treatment. Don’t repeat my mistake.
Yes, OP
Perhaps we should brainstorm leading indicators of nuclear attack.