Theoretical Computer Science MSc student at the University of [Redacted] in the United Kingdom.
I’m an aspiring alignment theorist; my research vibes are descriptive formal theories of intelligent systems (and their safety properties) with a bias towards constructive theories.
I think it’s important that our theories of intelligent systems remain rooted in the characteristics of real world intelligent systems; we cannot develop adequate theory from the null string as input.
DragonGod
I have not read all of them!
My current position is basically:
Actually, I’m less confident and now unsure.
Harth’s framing was presented as an argument re: the canonical Sleeping Beauty problem.
And the question I need to answer is: “should I accept Harth’s frame?”
I am at least convinced that it is genuinely a question about how we define probability.
There is still a disconnect though.
While I agree with the frequentist answer, it’s not clear to me how to backpropagate this into a Bayesian framework.
Suppose I treat myself as identical to all other agents in the reference class.
I know that my reference class will do better if we answer “tails” when asked about the outcome of the coin toss.
But it’s not obvious to me that there is anything to update from when trying to do a Bayesian probability calculation.
To me, there being many more observers in the tails world doesn’t seem to alter these probabilities at all:
P(waking up)
P(being asked questions)
P(...)
By stipulation my observational evidence is the same in both cases.
And I am not compelled by assuming I should be randomly sampled from all observers.
That there are many more versions of me in this other world does not by itself seem to raise the probability of me witnessing the observational evidence, since by stipulation all versions of me witness the same evidence.
I’m curious how your conception of probability accounts for logical uncertainty?
So in this case, I agree that if this experiment is repeated many times and every Sleeping Beauty created answered “tails”, the reference class of Sleeping Beauty agents would have many more correct answers than if the experiment is repeated many times and every Sleeping Beauty created answered “heads”.
I think there’s something tangible here and I should reflect on it.
I separately think though that if the actual outcome of each coin flip was recorded, there would be a roughly equal distribution between heads and tails.
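A minimal simulation sketch of both counting rules (assuming the standard setup: Beauty is woken once on heads, twice on tails, and answers “tails” at every awakening):

```python
import random

# Toy sketch of the two counting rules (assumed setup, not anyone's canonical
# statement of the problem): heads -> one awakening, tails -> two awakenings,
# and Beauty answers "tails" at every awakening.
N_EXPERIMENTS = 100_000

tails_flips = 0
awakenings = 0
correct_awakenings = 0

for _ in range(N_EXPERIMENTS):
    coin = random.choice(["heads", "tails"])
    n_wakes = 2 if coin == "tails" else 1
    awakenings += n_wakes
    if coin == "tails":
        tails_flips += 1
        correct_awakenings += n_wakes  # every awakening guesses "tails"

print("fraction of flips that were tails:    ", tails_flips / N_EXPERIMENTS)       # ~0.5
print("fraction of awakenings answered right:", correct_awakenings / awakenings)   # ~2/3
```

Per flip, tails comes up about half the time; per awakening, “tails” is correct about two thirds of the time, which is exactly the disconnect I’m trying to resolve.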
And when I was thinking through the question before, it was always about trying to answer a question regarding the actual outcome of the coin flip, not about what strategy maximises monetary payoffs under even bets.
While I do think that betting odds aren’t convincing re: actual probabilities (you can just have asymmetric payoffs on equally probable, mutually exclusive, and jointly exhaustive events), the “reference class of agents being asked this question” framing seems like a more robust rebuttal.
I want to take some time to think on this.
Strong upvoted because this argument genuinely makes me think I might be wrong here.
Much less confident now, and mostly confused.
I mean I am not convinced by the claim that Bob is wrong.
Bob’s prior probability is 50%. Bob sees no new evidence to update this prior so the probability remains at 50%.
I don’t favour an objective notion of probabilities. From my OP:
2. Bayesian Reasoning
Probability is a property of the map (agent’s beliefs), not the territory (environment).
For an observation O to be evidence for a hypothesis H, P(O|H) must be > P(O|¬H).
The wake-up event is equally likely under both Heads and Tails scenarios, thus provides no new information to update priors.
The original 50⁄50 probability should remain unchanged after waking up.
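Written out as an explicit Bayes update (with $W$ = “Beauty wakes up” and, as stipulated, $P(W \mid H) = P(W \mid \neg H) = 1$):

$$P(H \mid W) = \frac{P(W \mid H)\,P(H)}{P(W \mid H)\,P(H) + P(W \mid \neg H)\,P(\neg H)} = \frac{1 \times 0.5}{1 \times 0.5 + 1 \times 0.5} = 0.5$$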
So I am unconvinced by your thought experiments? Observing nothing new, I think the observer’s priors should remain unchanged.
I feel like I’m not getting the distinction you’re trying to draw out with your analogy.
I mean I think the “gamble her money” interpretation is just a different question. It doesn’t feel to me like a different notion of what probability means, but just betting on a fair coin but with asymmetric payoffs.
The second question feels closer to an accurate interpretation of what probability actually means.
[Question] Change My Mind: Thirders in “Sleeping Beauty” are Just Doing Epistemology Wrong
i.e. if each forecaster has a first-order belief , and is your second-order belief about which forecaster is correct, then should be your first-order belief about the election.
I think there might be a typo here. Did you instead mean to write: “” for the second-order beliefs about the forecasters?
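For concreteness, the relationship I’d expect here (my notation, which may not match the post’s): if forecaster $i$ reports first-order belief $p_i$ and $q_i$ is your second-order credence that forecaster $i$ is the correct one (with $\sum_i q_i = 1$), then

$$P(\text{election}) = \sum_i q_i \, p_i$$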
The claim is that, given the presence of differential adversarial examples, the optimisation process would adjust the parameters of the model such that its optimisation target is the base goal.
That was it, thanks!
Probably sometime last year, I posted on Twitter something like: “agent values are defined on agent world models” (or similar) with a link to a LessWrong post (I think the author was John Wentworth).
I’m now looking for that LessWrong post.
My Twitter account is private and search is broken for private accounts, so I haven’t been able to track down the tweet. If anyone has guesses for what the post I may have been referring to was, do please send it my way.
Most of the catastrophic risk from AI still lies in superhuman agentic systems.
Current frontier systems are not that (and IMO not poised to become that in the very immediate future).
I think AI risk advocates should be clear that they’re not saying GPT-5/Claude Next is an existential threat to humanity.
[Unless they actually believe that. But if they don’t, I’m a bit concerned that their message is being rounded up to that, and when such systems don’t reveal themselves to be catastrophically dangerous, it might erode their credibility.]
Immigration is such a tight constraint for me.
My next career steps after I’m done with my TCS Masters are primarily bottlenecked by “what allows me to remain in the UK” and then “keeps me on track to contribute to technical AI safety research”.
What I would like to do for the next 1-2 years (“independent research” / “further upskilling to get into a top ML PhD program”) is not all that viable a path given my visa constraints.
Above all, I want to avoid wasting N more years by taking a detour through software engineering again so I can get visa sponsorship.
[I’m not conscientious enough to pursue AI safety research/ML upskilling while managing a full time job.]
Might just try and see if I can pursue a TCS PhD at my current university and do TCS research that I think would be valuable for theoretical AI safety research.
The main detriment of that is I’d have to spend N more years in <city> and I was really hoping to come down to London.
Advice very, very welcome.
[Not sure who to tag.]
Specifically, the experiments by Morrison and Berridge demonstrated that by intervening on the hypothalamic valuation circuits, it is possible to adjust policies zero-shot such that the animal has never experienced a previously repulsive stimulus as pleasurable.
I find this a bit confusing as worded; is something missing?
Does anyone know a ChatGPT plugin for browsing documents/webpages that can read LaTeX?
The plugin I currently use (Link Reader) strips out the LaTeX in its payload, and so GPT-4 ends up hallucinating the LaTeX content of the pages I’m feeding it.
How frequent are moderation actions? Is this discussion about saving moderator effort (by banning someone before you have to remove the rate-limited quantity of their bad posts), or something else? I really worry about “quality improvement by prior restraint”—both because low-value posts aren’t that harmful, they get downvoted and ignored pretty easily, and because it can take YEARS of trial-and-error for someone to become a good participant in LW-style discussions, and I don’t want to make it impossible for the true newbies (young people discovering this style for the first time) to try, fail, learn, try, fail, get frustrated, go away, come back, and be slightly-above-neutral for a bit before really hitting their stride.
I agree with Dagon here.
Six years ago after discovering HPMOR and reading part (most?) of the Sequences, I was a bad participant in old LW and rationalist subreddits.
I would probably have been quickly banned on current LW.
It really just takes a while for people new to LW-like norms to adjust.
I find noticing surprise more valuable than noticing confusion.
Hindsight bias and post hoc rationalisations make it easy for us to gloss over events that were a priori unexpected.
I think the model of “a composition of subagents with total orders on their preferences” is a descriptive model of inexploitable incomplete preferences, and not a mechanistic model. At least, that was how I interpreted “Why Subagents?”.
I read @johnswentworth as making the claim that such preferences could be modelled as a vetocracy of VNM rational agents, not as claiming that humans (or other objects of study) are mechanistically composed of discrete parts that are themselves VNM rational.
I’d be more interested/excited by a refutation on the grounds of: “incomplete inexploitable preferences are not necessarily adequately modelled as a vetocracy of parts with complete preferences”. VNM rationality and expected utility maximisation are mostly used as descriptive rather than mechanistic tools anyway.
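A toy sketch of the kind of thing I mean by “vetocracy of parts with complete preferences” (my own construction with made-up option names, not anything from “Why Subagents?”):

```python
from typing import Hashable, Sequence

# Toy model: each subagent has a complete preference expressed as a utility
# function over options; the composite "vetocracy" prefers A over B only if
# every subagent weakly agrees and at least one strictly agrees. Pairs the
# subagents disagree on are simply incomparable.

def vetocracy_prefers(a: Hashable, b: Hashable, utilities: Sequence[dict]) -> str:
    """Return 'A>B', 'B>A', 'A~B' (indifferent), or 'incomparable'."""
    a_scores = [u[a] for u in utilities]
    b_scores = [u[b] for u in utilities]
    if all(x >= y for x, y in zip(a_scores, b_scores)):
        return "A>B" if any(x > y for x, y in zip(a_scores, b_scores)) else "A~B"
    if all(y >= x for x, y in zip(a_scores, b_scores)):
        return "B>A"
    return "incomparable"  # each direction is vetoed by at least one subagent

# Example: two subagents with opposed rankings of mushroom vs pepperoni.
subagents = [
    {"mushroom": 2, "pepperoni": 1, "anchovy": 0},
    {"mushroom": 1, "pepperoni": 2, "anchovy": 0},
]
print(vetocracy_prefers("mushroom", "pepperoni", subagents))  # incomparable
print(vetocracy_prefers("mushroom", "anchovy", subagents))    # A>B (both agree)
```

The composite preference is incomplete (mushroom vs pepperoni is left undecided) even though each part ranks everything, and it avoids money-pump cycles because every accepted trade weakly improves every part’s ranking. The sketch is descriptive, not a claim that agents are mechanistically built from such parts.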
Oh, do please share.
Yeah, since posting this question:
I had a firm notion in mind for what I thought probability meant. But Rafael Harth’s answer really made me unconfident that the notion I had in mind was the right notion of probability for the question.