I don’t think Bayesianism gives you particular insight into that, for the same reasons I don’t think it gives you particular insight into human cognition.
In the areas I focus on, at least, I wouldn’t know where to start if I couldn’t model agents using Bayesian tools. Game-theoretic concepts like social dilemma, equilibrium selection, costly signaling, and so on seem indispensable, and you can’t state these crisply without a formal model of preferences and beliefs. You might disagree that these are useful concepts, but at this point I feel like the argument has to take place at the level of individual applications of Bayesian modeling, rather than a wholesale judgement about Bayesianism.
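To make the “stating them crisply” point concrete, here is a minimal sketch (with illustrative payoff numbers, not drawn from any particular source) of how a social dilemma is defined in terms of explicit utilities. The defining structure — defection dominates, yet mutual cooperation Pareto-dominates mutual defection — is a claim *about* the agents’ utility functions, which is why it’s hard to state without them:

```python
# Minimal sketch: the Prisoner's Dilemma stated via explicit utilities.
# Payoff values are illustrative; any T > R > P > S gives the same structure.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

payoff = {  # payoff[(my_action, their_action)] = my utility
    ("C", "C"): R, ("C", "D"): S,
    ("D", "C"): T, ("D", "D"): P,
}

# Defection strictly dominates: whatever the other player does, D pays more...
dominance = all(payoff[("D", o)] > payoff[("C", o)] for o in ("C", "D"))

# ...yet mutual cooperation Pareto-dominates mutual defection.
dilemma = payoff[("C", "C")] > payoff[("D", "D")]
```

Both conditions hold here, and together they are what makes the game a “social dilemma” — a property you can check mechanically once preferences are formalized.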
misleading concepts like “boundedly rational” (compare your claim with the claim that a model in which all animals are infinitely large helps us identify properties that are common to “boundedly sized” animals)
I’m not saying that the idealized model helps us identify properties common to more realistic agents just because it’s idealized. I agree that many idealized models may be useless for their intended purpose. I’m saying that, as it happens, whenever I think of various agentlike systems it strikes me as useful to model those systems in a Bayesian way when reasoning about some of their aspects—even though the details of their architectures may differ a lot.
I didn’t quite understand why you said “boundedly rational” is a misleading concept, I’d be interested to see you elaborate.
if we have no good reason to think that explicit utility functions are something that is feasible in practical AGI
I’m not saying that we should try to design agents who are literally doing expected utility calculations over some giant space of models all the time. My suggestion was that it might be good—for the purpose of attempting to guarantee safe behavior—to design agents which in limited circumstances make decisions by explicitly distilling their preferences and beliefs into utilities and probabilities. It’s not obvious to me that this is intractable. Anyway, I don’t think this point is central to the disagreement.
Game-theoretic concepts like social dilemma, equilibrium selection, costly signaling, and so on seem indispensable
I agree with this. I think I disagree that “stating them crisply” is indispensable.
I wouldn’t know where to start if I couldn’t model agents using Bayesian tools.
To be a little contrarian, I want to note that this phrasing has a certain parallel with the streetlight effect: you wouldn’t know how to look for your keys if you didn’t have the light from the streetlamp. In particular, this is also what someone would say if we currently had no good methods for modeling agents, but Bayesian tools were the ones which merely seemed good.
Anyway, I’d be interested in having a higher-bandwidth conversation with you about this topic. I’ll get in touch :)