“Physicist motors” makes little sense because that position won out so completely that the alternative is not readily available when we think about “motor design”. But this was not always so! For a long time, windmills and water wheels were based on intuition.
But in fact one can apply math and physics and take a “physicist motors” approach to motor design, which we see appearing in the 18th and 19th centuries. We see huge improvements in the efficiency of things like water wheels, the development of gas thermodynamics, steam engines, and so on, all playing a major role in the Industrial Revolution.
The difference is that motor performance is an easy target to measure and understand, and very closely related to what we actually care about (low Goodhart susceptibility). There are a bunch of parameters—cost, efficiency, energy source, size, and so on—but the parameter space is fairly tractable. So it was very easy for the “physicist motor designers” to produce better motors, convince their customers the motors were better, and win out in the marketplace. (And no need for them to convince anyone who had contrary financial incentives.)
But “discourse” is a much more complex target, with extremely high dimensionality, and no easy way to simply win out in the market. So showing what a better approach looks like takes a huge amount of work and care, not only to develop it, but even to show that it’s better and why.
If you want to find it, the “non-physicist motors” camp is still alive and well, living in the “free energy” niche on YouTube among other places.
If discourse has such high dimensionality, compared to motors, how can anyone be confident that any progress has been made at all?
Now, or ever?
You can describe metrics that you think align with success, which can be measured and compared in isolation. If many / most / all such metrics agree, then you’ve probably made progress on discourse as a whole.
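A minimal sketch of what that could look like in practice, with the caveat that the metric names, values, and directions below are invented for illustration rather than proposed measures:

```python
# Hypothetical sketch: compare a "before" and "after" discourse sample on
# several proxy metrics and count how many agree on the direction of change.
# The metric names, values, and higher_is_better flags are all invented.

METRICS = {
    # name: higher_is_better
    "claims_with_stated_cruxes": True,
    "personal_attacks_per_1k_words": False,
    "questions_answered_directly": True,
    "participants_reporting_feeling_understood": True,
}

def improved(before: dict, after: dict) -> list[str]:
    """Return the metrics on which `after` beats `before`."""
    wins = []
    for name, higher_is_better in METRICS.items():
        delta = after[name] - before[name]
        if delta != 0 and (delta > 0) == higher_is_better:
            wins.append(name)
    return wins

before = {"claims_with_stated_cruxes": 0.20,
          "personal_attacks_per_1k_words": 3.1,
          "questions_answered_directly": 0.40,
          "participants_reporting_feeling_understood": 0.35}
after = {"claims_with_stated_cruxes": 0.45,
         "personal_attacks_per_1k_words": 1.2,
         "questions_answered_directly": 0.65,
         "participants_reporting_feeling_understood": 0.70}

wins = improved(before, after)
print(f"{len(wins)}/{len(METRICS)} metrics improved: {wins}")
```

If most or all of the metrics move the same way, that’s at least suggestive of progress on the whole, even if no single metric is decisive.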
Has anyone done this? Because I haven’t seen this done.
Metrics are only useful for comparison if they’re accepted by a sufficiently broad cross-section of society, since nearly everyone engages in discourse.
Otherwise the incentive will be for the interlocutor, or groups of interlocutors, to pick a few dozen metrics they selectively prefer out of a possibility space of thousands or millions (?), which nearly everyone else will then ignore.
The parent comment highlighted the fact that certain metrics measuring motor performance are universally, or near-universally, agreed upon because they have a direct and obvious relation to the desired outcome. I can’t think of any for discourse that could literally receive 99.XX% acceptance, the way shaft horsepower or energy consumption do.
As someone working on designing better electric motors, I can tell you that “What exactly is this metric I’m trying to optimize for?” is a huge part of the job. I can get 30% more torque by increasing magnet strength, but it increases copper loss by 50%. Is that better? I can drastically reduce vibration by skewing the stator, but it will cost me a couple percent of torque. Is that better or worse? There are a ton of things to trade off between, and even if your end application is fairly well specified, it’s generally not specified well enough to remove all significant ambiguity about which choices are better.
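To make that ambiguity concrete, here’s a toy sketch (all numbers invented) of how the “better” of two designs flips depending on how the application weights torque against copper loss:

```python
# Toy illustration (all numbers invented): the same two candidate designs
# rank differently depending on how the application weights the objectives.

baseline = {"torque_nm": 10.0, "copper_loss_w": 100.0}
stronger_magnets = {"torque_nm": 13.0, "copper_loss_w": 150.0}  # +30% torque, +50% loss

def score(design: dict, torque_weight: float, loss_weight: float) -> float:
    # Higher is better: reward torque, penalize copper loss.
    return torque_weight * design["torque_nm"] - loss_weight * design["copper_loss_w"]

cases = [(10.0, 0.2, "torque-starved actuator"),
         (10.0, 1.0, "efficiency-critical drive")]
for tw, lw, use_case in cases:
    a = score(baseline, tw, lw)
    b = score(stronger_magnets, tw, lw)
    winner = "stronger magnets" if b > a else "baseline"
    print(f"{use_case}: baseline={a:.0f}, stronger_magnets={b:.0f} -> {winner}")
```

Same designs, different weights, opposite verdicts; the physics doesn’t pick the weights, the application does, and the application usually underspecifies them.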
It’s true that there are some motor designs that are just better at everything (or everything one might “reasonably” care about), but that’s true for discourse as well. For example, if you are literally just shrieking at each other, whatever you’re trying to accomplish you can almost certainly accomplish it better by using words—even if you’re still going to scream those words.
The general rule is that if you suck badly enough relative to any nebulosity in where on the Pareto frontier you want to be, then there are “objective” gains to be made. In motors, simultaneous improvements in efficiency and power density will go far toward creating a “better” motor, which will be widely recognized as such. In discourse, the ability to create shared understanding and cooperation will go far toward creating “better” discourse, which will be widely regarded as such.
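In code, that rule is just Pareto dominance; a sketch with hypothetical numbers:

```python
# Sketch of the "objective gains" rule as Pareto dominance: if one design is
# at least as good on every axis and strictly better on one, it wins no
# matter how you weight the axes. All numbers are hypothetical.

def dominates(a: dict, b: dict, higher_is_better: dict) -> bool:
    """True if design `a` Pareto-dominates design `b`."""
    no_worse = all(
        a[k] >= b[k] if hib else a[k] <= b[k]
        for k, hib in higher_is_better.items()
    )
    strictly_better = any(a[k] != b[k] for k in higher_is_better)
    return no_worse and strictly_better

axes = {"efficiency_pct": True, "power_density_kw_per_kg": True, "cost_usd": False}
old_design = {"efficiency_pct": 91.0, "power_density_kw_per_kg": 4.0, "cost_usd": 800}
new_design = {"efficiency_pct": 94.0, "power_density_kw_per_kg": 5.5, "cost_usd": 800}

print(dominates(new_design, old_design, axes))  # True: better on two axes, worse on none
```

When neither design dominates, you’re back on the frontier, and “better” again depends on where on it you want to be.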
Optimal motors and discourse will look different in different contexts, getting it exactly right for your use case will always be nebulous, and there will always be weird edge cases and people deliberately optimizing for the wrong thing. But it’s really not different in principle.
If you meant to reply to my comment, the point was that there is nothing for discourse that’s accepted as widely as torque, magnet strength, copper loss, vibration, etc...
A sufficiently large supermajority of engineering departments on planet Earth can agree, with very little effort, on how to measure torque, for example. But even that hypothetical is beside the point, because international standardization bodies have literally already resolved any conflict of interpretation for the fundamental metrics, like those for velocity, mass, momentum, angular momentum, and magnetic field strength.
There’s nothing even close to that for discourse.
I hear what you’re saying.
What I’m saying is that, as someone whose day job is in large part about designing bleeding-edge aerospace motors, I find that the distinction you’re making falls apart pretty quickly in practice when I try to actually design and test a “physics motor”. Even things as supposedly straightforward as “measuring torque” haven’t been as straightforward as you’d expect. A few years ago we took one of our motors to a major aerospace company to test on their dyno, and they measured 105% efficiency. The problem was in their torque measurements. We had to get clever in order to come up with better measurements.
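For intuition on how a torque error alone can produce a number like that: dyno efficiency is mechanical power out over electrical power in, and mechanical power is torque times angular speed, so any bias in the torque reading propagates straight into the efficiency figure. A sketch with made-up numbers, not our actual test data:

```python
import math

# Made-up numbers (not actual test data): how a biased torque reading
# pushes a computed efficiency past 100%.

electrical_power_in_w = 10_000.0      # measured at the motor terminals
speed_rpm = 6_000.0
true_torque_nm = 15.0                 # what the shaft actually delivers

omega = speed_rpm * 2 * math.pi / 60  # shaft speed in rad/s

def efficiency(measured_torque_nm: float) -> float:
    mech_power_out_w = measured_torque_nm * omega  # P = tau * omega
    return mech_power_out_w / electrical_power_in_w

print(f"true efficiency:       {efficiency(true_torque_nm):.1%}")          # ~94.2%
print(f"with +12% torque bias: {efficiency(true_torque_nm * 1.12):.1%}")   # ~105.6%
```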
Coincidentally, I have also put a ton of work into figuring out how to engineer discourse, so I also have experience in figuring out what needs to be measured, how it can be measured, and how far you can trust your measurements to validate your theories. Without getting too far into it, you want to start out by calibrating against relatively concrete things like “Can I get this person, who has been saying they want to climb this wall but are too afraid, to actually climb the rock wall—yes or no?”. If you can do this reliably where others fail, you know you’re doing something that’s more effective than the baseline (even though that alone doesn’t uniquely validate your specific explanation). It’d take a book to explain how to build from there, but at the end of the day, if you can do concrete things that others cannot, and you can teach it so that the people you teach can demonstrate the same things, then you’re probably doing something with some validity to it. Probably.
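As a sketch of what “reliably where others fail” can mean quantitatively (the counts and baseline rate below are made up): with binary outcomes you can at least put a number on how surprising your track record would be if you were no better than the baseline:

```python
from math import comb

# Made-up counts: 14 of 16 clients climbed the wall, against an assumed
# baseline success rate of 30%. How surprising is that if we were actually
# no better than the baseline?

def p_at_least(k: int, n: int, p: float) -> float:
    """Exact one-sided tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

successes, trials, baseline_rate = 14, 16, 0.30
print(f"one-sided p-value: {p_at_least(successes, trials, baseline_rate):.1g}")
# A tiny value makes "no better than baseline" a poor explanation, though it
# still doesn't tell you *why* the method works.
```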
I’m not saying that there’s “no difference” between the process of optimizing discourse and the process of optimizing motors, but it is not nearly as black and white as you think. It’s possible to lead yourself astray with confirmation bias in “discourse”-related things, but you should see some of the shit engineers can convince themselves of without a shred of valid evidence. The cognitive skills, the nebulosity of the metrics, and the ease of coming up with trustworthy feedback are all very similar in my experience. More like “a darkish shade of gray” vs. “a somewhat darker shade of gray”.
Part of the confusion probably comes from the fact that what we see these days aren’t “physics motors”; they’re “engineering motors”. An engineering motor is when someone who understands physics designs a motor and then engineers populate the world with surface-level variations on that blueprint. By and large, my experience in both academic and professional engineering is that engineers struggle to understand and apply first principles, and to optimize anything outside the context that was covered in their textbooks. It’s true that within the confines of the textbook things do get more “cut and dried”, but it’s an illusion that goes away when you look past industry practice to the physics itself.
It’s true that our “discourse engineering” department is in a sorry state and that the industry guidelines are not to be trusted, but it’s not that we have literally nothing, and our relative lack is not because the subject is “too soft” to get a grip on. Motor design is hard to get a grip on too, when you’re trying to tread even slightly new ground. The problem is that the first-principles minds go into physics and sometimes engineering, but rarely psychology. In the few instances where I’ve seen bright minds approach “discourse” with an eye to verifiable feedback, they’ve found things to measure, been able to falsify their own predictions, and ended up (mostly independently) coming to similar conclusions, with demonstrably increased discourse abilities to show for it.
Can you link to some examples?
Yes, but it’s worth pointing out what you can actually expect to get from it, and how easily. Most of what I’m talking about is from personal interactions, and the stuff that’s online isn’t like “Oh, the science is unanimous, unarguable, and unambiguous”—because we’re talking about the equivalent of “physics motors”, not “engineering motors”. Even if our aerospace lab’s dyno results were publicly available, you’d be right not to trust them at face value. If you have a physics degree, then saying “Here’s the reasoning, here are the computer simulations and their assumptions, and here’s what our tests have shown so far” is easy. If you can’t distinguish valid physics from “free energy” kookiness, then even though it’s demonstrable, and has been demonstrated to those who understand motor-testing validity and have been following this stuff, it’s not necessarily trivial to set up a sufficiently legible demonstration for someone who hasn’t. It’s real, and we can get into how I know, but it might not be as easy as you’d like.
The thing that proved to me beyond a shadow of a doubt that there exist bright, feedback-oriented minds that have developed demonstrable abilities was talking to one over and over, and witnessing the demonstrations firsthand as well as the feedback cycles. This guy used to take paying clients for some specific issue they wanted resolved (e.g. “fear of heights”), set concrete testable goals (e.g. “If I climb this specific wall, I will consider our work to have been successful”), and then track his success rate over time and as he changed his methods. He used to rack his brain about what could be causing the behavior he’d see in his failures, come up with an insight that helped explain it, play with it in “role play” until he could anticipate the likely reactions and how to deal with them, and then go test it out with actual clients. And then iterate.
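The bookkeeping that kind of iteration implies is simple; here’s a minimal sketch with invented records, grouping outcomes by method version to see whether the success rate actually moves when the method changes:

```python
from collections import defaultdict

# Invented records: (method_version, client_met_their_concrete_goal).
sessions = [
    ("v1", False), ("v1", True), ("v1", False), ("v1", False), ("v1", True),
    ("v2", True), ("v2", False), ("v2", True), ("v2", True),
    ("v3", True), ("v3", True), ("v3", True), ("v3", False), ("v3", True),
]

by_version: dict[str, list[bool]] = defaultdict(list)
for version, succeeded in sessions:
    by_version[version].append(succeeded)

for version, outcomes in sorted(by_version.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"{version}: {sum(outcomes)}/{len(outcomes)} successes ({rate:.0%})")
```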
On the “natural discourse, not obviously connected to deliberate cultivation of skill” side, the overarching trajectory of our interactions is itself pretty exceptional. I started out kinda talking shit and dismissing his ideas in a way that would have pissed off pretty much anyone, and he was able to turn that around and ended up becoming someone I respect more than just about anyone. On the “clearly the result of iterated feedback, but diverging from natural discourse” side there’s quite a bit, but perhaps the best example is when I tried out his simple protocol for dealing with internal conflicts, applying it to physical pain, and it completely changed how I relate to pain to this day. I couldn’t imagine how it could possibly work “because the pain would still be there”, so I just did it to see what would happen, and it took about two minutes to go from “I can’t focus at all because this shit hurts” to “It literally does not bother me at all, despite feeling the exact same”. Having that shift of experience, and not even noticing the change as it happened… was weird.
From there, it was mostly just recognizing the patterns, knowing where to look, and knowing what isn’t actually an extraordinary claim.
This guy does have some stuff online including a description of that protocol and some transcripts, but again, my first reaction to his writings was to be openly dismissive of him so I’m not sure how much it’ll help. And the transcripts are from quite early in his process of figuring things out so it’s a better example of watching the mind work than getting to look at well supported and broadly applicable conclusions. Anyway, the first of his blog posts explaining that protocol is here, and other stuff can be found on the same site.
Another example that stands out to me as exceptionally clear, concise, and concrete (but pretty far from “natural discourse”, towards “mind hack fuckery”) is this demonstration by Steve Andreas of helping a woman get rid of her phobia. In particular, look at the woman’s responses, and Steve’s responses to them, at 0:39, 5:47, 6:12, 6:22, 6:26, and 7:44. The 25-year follow-up is neat too.
I note that “sufficiently broad” might mean something like “most of LessWrong users” or “most people attending this [set of] meetups”. Just as communication is targeted at a particular audience, discourse norms are (presumably) intended for a specific context. That context probably includes things like intended users, audience, goals, and so on. I doubt “rationalist discourse” norms will align well with “televised political debate discourse” norms any time soon.
Nonetheless, I think we can discuss, measure, and improve rationalist discourse norms; and I don’t think we should concern ourselves overly much with how well those norms would work in a presidential debate or a TV ad. I suspect there are still norms that apply very broadly, with broad agreement—but those mostly aren’t the ones we’re talking about here on LessWrong.