To check understanding: if, in the first timeline, we use a radiation that doesn’t exceed double the heteropneum’s EFS, then there remains one timeline. But if we do exceed it, there are multiple timelines that aren’t distinguishable … except that the ones with less than 2x the EFS can’t have been the original timeline, because otherwise there wouldn’t be branching. I guess I’m confused.
I’m confused by the predictions of death rates for the global population: it seems like that’s what would happen only if 50% of the world population were infected all at once. Is it just exponential growth that’s doing the work there? I’m also confused about how long contagion is well-modelled as exponential.
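Not expecting this to answer anything, but to make my confusion concrete, here is a minimal sketch of the naive exponential picture. All the parameters (doubling time, initial infections, fatality rate) are made up by me, not taken from the post.

```python
# Illustrative only: all parameters are made up, not taken from the post.
population = 8e9           # approximate world population
doubling_time_days = 5.0   # hypothetical doubling time for the contagion
infected = 1_000           # hypothetical initial number of infections
ifr = 0.01                 # hypothetical infection fatality rate

days = 0.0
while infected < 0.5 * population:
    infected *= 2
    days += doubling_time_days

print(f"~{days:.0f} days of unchecked doubling to reach half the population")
print(f"deaths if half the population is infected: ~{0.5 * population * ifr:,.0f}")
```

Under unchecked doubling you reach half the population within a few months, and the death figure is then just (fraction infected) × (fatality rate), which is why I suspect the exponential assumption is doing most of the work.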
To the extent this is a correct summary, I note that it’s not obvious to me that agents would sharpen their reasoning skills via test cases rather than establishing proofs on bounds of performance and so on. Though I suppose either way they are using logic, so it doesn’t affect the claims of the post
Here is my attempt at a summary of (a standalone part of) the reasoning in this post.
An agent trying to get a lot of reward can get stuck (or at least waste data) when the actions that seem good don’t plug into the parts of the world/data stream that contain information about which actions are in fact good. That is, an agent that restricts its information about the reward+dynamics of the world to only its reward feedback will get less reward
One way an agent can try to get additional information is by deductive reasoning from propositions (if they can relate sense data to world models to propositions). Sometimes the deductive reasoning they need to do will only become apparent shortly before the result of the reasoning is required (so the reasoning should be fast).
The nice thing about logic is that you don’t need fresh data to produce test cases: you can make up puzzles! As an agent will need fast deductive reasoning strategies, they may want to try out the goodness of their reasoning strategies on puzzles they invent, to make sure those strategies are fast and reliable (if they haven’t already proved reliability). (A rough sketch of what I mean is below this summary.)
In general, we should model things that we think agents are going to do, because that gives us a handle on reasoning about advanced agents. It is good to be able to establish what we can about the behaviour of advanced boundedly-rational agents so that we can make progress on the alignment problem, etc.
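As a rough sketch of what I mean by trying out reasoning strategies on invented puzzles (my own toy illustration, not anything from the post): generate random propositional ‘puzzles’, then check a fast-but-incomplete strategy against exhaustive evaluation for reliability.

```python
import itertools
import random

# Toy illustration (mine, not the post's): invent random propositional "puzzles"
# and check a candidate fast reasoning strategy against exhaustive ground truth.

def random_clause(n_vars):
    """A random disjunction of (possibly negated) variables: [(var, sign), ...]."""
    return [(random.randrange(n_vars), random.choice([True, False]))
            for _ in range(random.randint(1, 3))]

def random_puzzle(n_vars=4, n_clauses=5):
    """A random CNF formula: a list of clauses."""
    return [random_clause(n_vars) for _ in range(n_clauses)]

def satisfiable_brute_force(cnf, n_vars):
    """Ground truth: try every assignment."""
    return any(
        all(any(assignment[v] == sign for v, sign in clause) for clause in cnf)
        for assignment in itertools.product([True, False], repeat=n_vars)
    )

def fast_guess(cnf, n_vars, tries=10):
    """A cheap, incomplete strategy: a handful of random assignments.
    It can only err by wrongly reporting 'unsatisfiable'."""
    for _ in range(tries):
        assignment = [random.choice([True, False]) for _ in range(n_vars)]
        if all(any(assignment[v] == sign for v, sign in clause) for clause in cnf):
            return True
    return False

# 'Trying out the goodness' of the fast strategy on invented puzzles:
puzzles = [random_puzzle() for _ in range(200)]
agreement = sum(fast_guess(p, 4) == satisfiable_brute_force(p, 4) for p in puzzles)
print(f"fast strategy agreed with ground truth on {agreement}/200 invented puzzles")
```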
Somehow you’re expecting to get a lot of information about task B from performance on task A
Are “A” and “B” backwards here, or am I not following?
$A \to B$ is true iff one of (i) $A$ is false or (ii) $B$ is true. Therefore, if $B$ is some true sentence, $A \to B$ holds for any $A$. Here, $B$ is the true sentence in question.
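For concreteness, here is the standard truth table for material implication (note that $A \to B$ is true in every row where $B$ is true):

\[
\begin{array}{cc|c}
A & B & A \to B \\ \hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]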
Most of the rituals were created by individuals that did actually understand the real reasons for why certain things had to happen
This is not part of my interpretation, so I was surprised to read this. Could you say more about why you think this? (Either why you think this is being argued for in Vaniver’s / Scott’s posts, or why you believe it is fine; I’m mostly interested in the arguments for this claim).
For example, Scott writes:
How did [culture] form? Not through some smart Inuit or Fuegian person reasoning it out; if that had been it, smart European explorers should have been able to reason it out too.
And quotes (either from Scholar’s Stage or The Secret of Our Success):
It’s possible that, with the introduction of rice, a few farmers began to use bird sightings as an indication of favorable garden sites. On-average, over a lifetime, these farmers would do better – be more successful – than farmers who relied on the Gambler’s Fallacy or on copying others’ immediate behavior.
Which I don’t read as the few farmers knowing why they should use bird sightings.
Or this quote from Xunzi in Vaniver’s post:
One performs divination and only then decides on important affairs. But this is not to be regarded as bringing one what one seeks, but rather is done to give things proper form.
Which doesn’t sound like Xunzi understanding the specific importance of a given divination (I realise Xunzi is not the originator of the divinatory practices)
This link (and the one for “Why do we fear the twinge of starting?”) is broken (I think it’s an admin view?).
Yes, you’re quite right!
The intuition becomes a little clearer when I take the following alternative derivation:
Let us look at the change in expected value when I increase my capabilities $c$. Write $p$ for my probability of winning, $s$ for my probability of deploying a safe AI, and $t$ for the other actor’s probability of deploying a safe AI. From the expected value stemming from worlds where I win, we have $\frac{d}{dc}(p\,s) = \frac{dp}{dc}\,s + p\,\frac{ds}{dc}$. For the other actor, their probability of winning decreases at a rate that matches my increase in probability of winning. Also, their probability of deploying a safe AI doesn’t change. So the change in expected value stemming from worlds where they win is $-\frac{dp}{dc}\,t$.
We should be indifferent to increasing capabilities when these sum to 0, so $\frac{dp}{dc}\,s + p\,\frac{ds}{dc} - \frac{dp}{dc}\,t = 0$.
Let’s choose our units so that $\frac{dp}{dc} = 1$, which leaves $s + p\,\frac{ds}{dc} = t$. Substituting the expressions for these quantities from your comment, dividing through, and collecting like terms then recovers the same expression as in your derivation.
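As a purely illustrative check, with functional forms I am making up here (not the ones from your comment): take $p(c) = c$, $s(c) = 1 - c$, and hold the other actor’s safety fixed at $t$. Then

\begin{align*}
E(c) &= \underbrace{c\,(1-c)}_{\text{worlds I win}} \;+\; \underbrace{(1-c)\,t}_{\text{worlds they win}}, \\
\frac{dE}{dc} &= (1 - 2c) - t \;=\; 0 \quad\Longrightarrow\quad c^{*} = \frac{1-t}{2}.
\end{align*}

So, for example, if the other actor is certainly safe ($t = 1$) the indifference point is $c^{*} = 0$: no reason to push capabilities at all. The toy version is only meant to show the shape of the argument, that the indifference point is where the marginal gain in worlds I win exactly offsets the marginal loss in worlds they win.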
It seems like keeping a part ‘outside’ the experience/feeling is a big part of it for you. Does that sound right? (Similar to the unblending Kaj talks about in his IFS post, or clearing a space in Focusing.)
Now of course today’s structure/process is tomorrow’s content
Do you mean here that as you progress, you will introspect on the nature of your previous introspections, rather than more ‘object-level’ thoughts and feelings?
I think that though one may use the techniques looking for a solution (which I agree makes them solution-oriented in a sense), it’s not right to say that in, say, Focusing, you introspect on solutions rather than causes. So maybe the difference is more the optimism than the area of focus?
This points to a lot of what the difference feels like to me! It jibes with my intuition for the situation that prompted this question.
I was mildly anxious about something (I forget what), and stopped myself as I was about to move on to some work (in which I would have lost the anxiety). I thought it might be useful to be with the anxiety a bit and see what it was about the situation that was making me anxious. This felt like it would be useful, but then I wondered if I would get bad ruminative effects. It seemed like I wouldn’t, but I wasn’t sure why.
I’m not sure if I should be given pause by the fact that you say rumination is concerned with action; my reading of the Wikipedia page is that being concerned with action is a big missing feature of rumination.
I came back to this post because I was thinking about Scott’s criticism of subminds where he complains about “little people who make you drink beer because they like beer”.
I’d already been considering how your robot model is nice for seeing why something submind-y would be going on. However, I was still confused about thinking of these various systems as basically people who have feelings and should be negotiated with, using much the same techniques I’d use to negotiate with people.
Revisiting, the “Personalized characters” section was pretty useful. It’s nice to see it more as a claim that ‘[sometimes for some people] internal processes may be represented using social machinery’ than ‘internal agents are like fighting people’.
[Question] When does introspection avoid the pitfalls of rumination?
Not Ben, but I have used X Goodhart more than 20 times (summing over all the Xs)
A section of an interesting talk by Anna Salamon relating to this. It makes the point that if an AI’s ability to improve its model of fundamental physics is not linear in the amount of the Universe it controls, such an AI would be at least somewhat risk-averse (with respect to gambles that give it different proportions of our Universe).
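To spell out the step in my own gloss (not hers): if the returns are sublinear, so that the usefulness $u$ of the controlled fraction $X$ of the Universe is concave, then Jensen’s inequality gives

\[
\mathbb{E}\!\left[u(X)\right] \;\le\; u\!\left(\mathbb{E}[X]\right),
\]

so the AI weakly prefers the certain fraction $\mathbb{E}[X]$ to any gamble with the same expectation, i.e. it is risk-averse with respect to such gambles.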
I really enjoyed this post and starting with the plausible robot design was really helpful for me accessing the IFS model. I also enjoyed reflecting on your previous objections as a structure for the second part.
The part with repeated unblending sounds reminiscent of the “Clearing a space” stage of Focusing, in which one acknowledges and sets slightly to the side the problems in one’s life. Importantly, you don’t “go inside” the problems (I take ‘going inside’ to be more-or-less experiencing the affect associated with the problems). This seems pretty similar to stopping various protectors from placing negative affect into consciousness.
I noticed something at the end that it might be useful to reflect on: I pattern matched the importance of childhood traumas to woo and it definitely decreased my subjective credence in the IFS model. I’m not sure to what extent I endorse that reaction.
One thing I’d be interested in expansion on: you mention you think that IFS would benefit most people. What do you mean by ‘benefit’ in this case? That it would increase their wellbeing? Their personal efficacy? Or perhaps that it will increase at least one of their wellbeing and personal efficacy but not necessarily both for any given person?
I think this is a great summary (EDIT: this should read “I think the summary in the newsletter was great”).
That said, these models are still very simplistic, and I mainly try to derive qualitative conclusions from them that my intuition agrees with in hindsight.
Yes, I agree. The best indicator I had of having made a mathematical mistake was whether my intuition agreed in hindsight.
Thanks! The info on parasite specificity/history of malaria is really useful.
I wonder if you know of anything specifically about the relative cost-effectiveness of nets for infected people vs uninfected people? No worries if not
I think this can be resolved by working in terms of packages of property (in this case, uninterrupted ownership of the land), where the value can be greater than the sum of its parts. If someone takes a day of ownership, they have to be willing to pay in excess of the difference between “uninterrupted ownership for 5 years” and “ownership for the first 3 years and 2 days”, which could be a lot. Certainly this is a bit of a change from standard Harberger taxes, since it needs to allow people to put valuations on extended periods.
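To make the packages idea concrete, here is a minimal sketch. The valuation function and all the numbers are invented, and I have simplified to someone taking a single day out of a five-year span.

```python
# Illustrative sketch: valuations over ownership packages rather than single days.
# The valuation function and numbers are invented; the point is only that an
# uninterrupted span can be worth much more than the sum of its days.

def value(owner_days: int, uninterrupted: bool) -> float:
    """A hypothetical package valuation: ownership is worth far more if uninterrupted."""
    base = 100 * owner_days                 # per-day value
    bonus = 50_000 if uninterrupted else 0  # e.g. you can build on the land
    return base + bonus

five_years = 5 * 365
value_uninterrupted = value(five_years, uninterrupted=True)
value_with_gap = value(five_years - 1, uninterrupted=False)  # someone took one day

# Under package-level pricing, the taker must pay at least the difference:
min_price_for_one_day = value_uninterrupted - value_with_gap
print(min_price_for_one_day)  # 50100: far more than one day's pro-rata value (100)
```

The required payment is dominated by the loss of the uninterrupted-use bonus rather than the pro-rata value of a single day.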
It also doesn’t really resolve Gwern’s case below, where the value to an actor of some property might be less than the amount of value they have custody over via that property.