A/Cs primarily work by using electricity to drive a pressure differential between the cool, low-pressure indoor refrigerant and the hot, high-pressure outdoor refrigerant. It’s not just moving air around. PV = nRT! Here’s a video explainer.
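The PV = nRT point can be illustrated with a toy calculation. This is an idealized-gas cartoon, not real HVAC math (actual refrigerants also change phase), and every number below is made up for illustration:

```python
# Idealized sketch: why compressing a gas makes it hot.
# Adiabatic compression of an ideal gas: T2 = T1 * (V1/V2)**(gamma - 1)

gamma = 1.4              # heat-capacity ratio for a diatomic ideal gas
T1 = 300.0               # indoor-side gas temperature, kelvin (~27 C)
compression_ratio = 3    # V1/V2: compressor squeezes gas to 1/3 its volume

T2 = T1 * compression_ratio ** (gamma - 1)
P_ratio = compression_ratio ** gamma  # P2/P1 from the same adiabatic relation

print(f"After compression: T = {T2:.0f} K, pressure up {P_ratio:.1f}x")
```

Squeezing the gas to a third of its volume pushes it from room temperature to well over 400 K, which is the hot high-pressure side the outdoor coil then dumps to the outside air.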
Read carefully, the post doesn’t ignore the effect of the evaporator and condenser...
> Here’s how this air conditioner works. It sucks in some air from the room. It splits that air into two streams, and pumps heat from one stream to the other—making some air hotter, and some air cooler. The cool air, it blows back into the room. The hot air, it blows out the window.
… But it is written in such a way that the reader might come away with the impression that the single-hose A/C has zero net effect on the household temperature. Even the edited-in caveat makes it sound like it might be cooling off the room in which it’s located, at the expense of heating up the rest of the house.
> The actual effect of this air conditioner is to make the space right in front of the air conditioner nice and cool, but fill the rest of the house with hot outdoor air. Probably not what one wants from an air conditioner!...
> … I want to clarify that it will still cool down a room on net. If the air inside is all perfectly mixed together, it will still end up cooler with the air conditioner than without. The point is not that it doesn’t work at all. The point is that it’s stupidly inefficient in a way which I do not think consumers would plausibly choose over the relatively-low cost of a second hose if they recognized the problems.
This reading is reinforced by using the A/C as an analogy for a truly zero-value or destructive AI:
> It’s “Potemkin village world”: a world designed to look amazing, but with nothing behind the facade. Maybe not even any living humans behind the facade—after all, even generally-happy real humans will inevitably sometimes appear less-than-maximally “good”.
For this analogy to work, we’d need to imagine an A/C that has zero net effect on temperature, or that actively heats up the house on net. Given that I expect more readers here will know about this hypothesis than about the practical details of how an A/C works, I worry they’re more likely to see AI as a metaphor for this A/C than this A/C as a metaphor for AI!
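A toy energy balance makes the "cools on net, but wastefully" point concrete. It assumes a perfectly mixed room, and every number below is an illustrative assumption, not a measurement:

```python
# Toy steady-state model of a single-hose A/C in a perfectly mixed room.
# The unit pumps heat out the window, but the indoor air it exhausts is
# replaced by hot outdoor air leaking in, which claws back some cooling.

c_air = 1.2            # volumetric heat capacity of air, kJ / (m^3 * K)
T_out = 35.0           # outdoor temperature, C
T_room = 27.0          # room temperature, C

heat_pumped_out = 2.0  # kW of heat the unit moves outdoors
exhaust_flow = 0.1     # m^3/s of indoor air blown out the window

# Hot outdoor air infiltrates to replace the exhausted volume:
infiltration_load = exhaust_flow * c_air * (T_out - T_room)  # kW

net_cooling = heat_pumped_out - infiltration_load
print(f"infiltration load: {infiltration_load:.2f} kW")
print(f"net cooling:       {net_cooling:.2f} kW")
```

With these made-up numbers the infiltration of outdoor air eats roughly half the delivered cooling, yet the net effect stays positive: the single-hose unit really does cool on net, just inefficiently, which is why the "zero or negative net effect" reading overstates the post's claim.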
Note also that regulation could totally fix this particular problem. We could ban single-hose A/Cs; there’s a whole nation of HVAC experts who could convey this information, and they’re licensed in the USA, so there’s already a legal framework for identifying the relevant experts.
Waiting might also fix the problem, especially if these people have metered electricity. It’s entirely possible that they’ll notice their high summer electric bill, consider efficiency improvements, look into the A/C, do ten seconds of research, and invest in a two-hose unit the next time around.
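That consumer reasoning is just a payback calculation. A back-of-the-envelope sketch, where every figure is a hypothetical placeholder rather than real pricing data:

```python
# Hypothetical payback check a bill-watching consumer might run.
# All numbers are illustrative assumptions, not measured data.

kwh_price = 0.15         # $/kWh
hours_per_summer = 500   # hours of A/C use per summer
single_hose_kw = 1.2     # electrical draw for a given amount of cooling
dual_hose_kw = 0.9       # dual-hose delivers the same cooling on less power
price_premium = 60.0     # extra purchase cost of a dual-hose unit, $

annual_savings = (single_hose_kw - dual_hose_kw) * hours_per_summer * kwh_price
payback_years = price_premium / annual_savings
print(f"saves ${annual_savings:.0f}/summer; pays back in {payback_years:.1f} summers")
```

Under these assumptions the second hose pays for itself within a few summers, which is the sort of arithmetic a high electric bill might prompt.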
When discussing AI, it seems valuable to distinguish more clearly between three scenarios:
1. Individual AI products truly analogous to an A/C. They are specific services, which can indeed be more or less efficient, and can be chosen badly by ill-informed consumers. We might handle these in a similar way to how conventional products are regulated.
2. An AI-driven world in which human decisions made in collaboration with individual AI products drive Molochian metrics at the expense of actual wellbeing. Bad results, but debuggable, and occurring for essentially the same reason as all the other institutional coordination failures we’re already dealing with.
3. A fast-takeoff superintelligent AI actively attempting to seize control of the observable universe in order to maximize paperclip production. I think this is not analogous to an A/C, partly because it assumes that the AI doesn’t Goodhart itself as it chases world domination. And, come to think of it, I haven’t seen this possibility discussed before (which means next to nothing; I am not well informed). Why wouldn’t a self-improving AI face the same Goodharting problem in achieving its world-domination designs that we face in creating that AI in the first place?