I do agree that most things people identify as tenets of bayesianism are useful for thinking about knowledge; but I claim that they would be just as useful, and better-justified, if we forced each one to stand or fall on its own.
This makes me think that you’re (mostly) arguing against ‘Bayesianism’, i.e. effectively requesting that we ‘taboo’ that term and discuss its components (“tenets”) separately.
One motivation for defending Bayesianism itself is that the relevant ideas (“tenets”) are sufficiently entangled that they can or should be considered effectively inseparable.
I also have a sense that the particular means by which intelligent entities like ourselves can, incrementally, approach thinking like an ‘idealized Bayesian intelligence’ is very different from what you sketched in your dialog. I think a part of that is something like maintaining a ‘network’ of priors and performing (approximate) Bayesian updates on specific ‘modules’ in that network and, more infrequently, propagating updates through (some portion of) the network. Because of that, I didn’t think this last part of the dialog was warranted:
A: So why do people advocate for the importance of bayesianism for thinking about complex issues if it only works in examples where all the variables are well-defined and have very simple relationships?
B: I think bayesianism has definitely made a substantial contribution to philosophy. It tells us what it even means to assign a probability to an event, and cuts through a lot of metaphysical bullshit.
In my own reasoning about the COVID-19 pandemic, and in what I consider to be the best reasoning I’ve heard or read about it, Bayesianism seems invaluable. And most of the value is in explicitly considering both evidence and the lack of evidence, how it should be interpreted based on (reasonably) explicit prior beliefs within some specific ‘belief module’, and what updates to other belief modules in the network are warranted. One could certainly do all of that without explicitly believing that Bayesianism is overall effective, but it also seems like a weird ‘epistemological move’ to make.
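To make the ‘network of belief modules’ picture a bit more concrete, here’s a minimal sketch of the kind of bookkeeping I have in mind: each module does a local (approximate) Bayesian update, including on the *absence* of expected evidence, and only occasionally pushes its posterior downstream. Every module name and number below is invented purely for illustration; it’s a toy, not a claim about how the post’s author (or anyone else) actually reasons.

```python
# Illustrative sketch only: a tiny 'network' of belief modules, each updated
# locally with Bayes' rule, with an infrequent propagation step downstream.
# All module names and probabilities are made up.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) from P(H), P(E | H), and P(E | not H)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Module 1: "community transmission is already widespread here"
p_widespread = 0.10  # prior

# Evidence observed: a few confirmed cases with no travel history.
# P(such cases | widespread) = 0.8, P(such cases | not widespread) = 0.1
p_widespread = bayes_update(p_widespread, 0.8, 0.1)

# Absence of expected evidence: hospitals are NOT reporting unusual pneumonia load.
# P(no unusual load | widespread) = 0.4, P(no unusual load | not widespread) = 0.9
p_widespread = bayes_update(p_widespread, 0.4, 0.9)

# Infrequent propagation step: push the updated posterior into a downstream
# module ("should I cancel travel plans?") rather than re-deriving everything
# from scratch on every new observation.
p_cancel_given_widespread = 0.9
p_cancel_given_not = 0.2
p_cancel = (p_widespread * p_cancel_given_widespread
            + (1 - p_widespread) * p_cancel_given_not)

print(f"P(widespread) ≈ {p_widespread:.2f}, P(cancel travel) ≈ {p_cancel:.2f}")
```

Running it gives roughly P(widespread) ≈ 0.28 and P(cancel travel) ≈ 0.40; the point is not the particular numbers but that each update is local and cheap, and the more expensive ‘propagate through the network’ step can happen much less often.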
If you agree that most of the tenets of a big idea are useful (or true), in what important sense is it useful to say you’re against the big idea? Certainly any individual tenet can be more or less useful or true, but in helping each one stand or fall on its own, when are you sharpening the big idea versus tearing it down?
This makes me think that you’re (mostly) arguing against ‘Bayesianism’, i.e. effectively requesting that we ‘taboo’ that term and discuss its components (“tenets”) separately.
This is not an unreasonable criticism, but it feels slightly off. I am not arguing against having a bunch of components which we put together into a philosophy with a label; e.g. liberalism is a bunch of different components which get lumped together, and that’s fine. I am arguing that the way the tenets of bayesianism are currently combined is bad, because there’s this assumption that they are a natural cluster of ideas that can be derived from the mathematics of Bayes’ rule. It’s specifically discarding this assumption that I think is helpful. Then we could still endorse most of the same ideas as before, but add more which didn’t have any link to Bayes’ rule, and stop privileging bayesianism as a tool for thinking about AI. (We’d also want a new name for this cluster, I guess; perhaps reasonism? Sounds ugly now, but we’d get used to it.)