In both cases you need to make some advance determination about the “ideal” level—of emission quantity in one case and emission price in the other—so I don’t see any obvious disadvantage to cap and trade in this respect.
Ideal emission quantity is a function of cost. Since cost is itself a function of quantity, it all gets very complicated. Ideal price is much simpler to determine, so long as cost is an approximately linear function of quantity, which it will tend to be.
You basically have to know the ideal price and the demand curve to figure out where to set the cap, but you only need to know the price to set the tax rate.
If it were something where giving off more than X emissions would cause a runaway greenhouse effect and anything less than that is okay, then you’d cap and trade. But it’s generally not like that.
If it turned out that the price elasticity of carbon emissions was effectively zero, for instance, so that changes in the price had absolutely no effect on the quantity of emissions, then I’m assuming you would regard a carbon tax as pointless.
True, but that would also mean that anything that does work would have a cost far higher than we are willing to pay. The beauty of taxing it is that it makes us decrease CO2 emissions conditional on it being worthwhile.
I do agree, though, that any actually implemented cap and trade system will fall prey to all kinds of jiggery-pokery due to corporate influence on the government, and will be far from the idealized system that many economists envision.
I never said that. As it is, we’re giving away emissions for no good reason, but we don’t have to. We can let the government have them to begin with, then sell them.
Of course, these kinds of jiggery-pokery are a large part of why we used this system. If lobbyists can change who you’re giving emission rights to, they can also convince you it’s a good idea to give people emission rights in the first place.
You basically have to know the ideal price and the demand curve to figure out where to set the cap, but you only need to know the price to set the tax rate.
The situation is entirely symmetric. If quantity is a linear function of cost, then cost is a linear function of quantity. I could easily flip your claim around and say, “You have to know the ideal quantity and the demand curve to figure out the optimal tax rate, but you only need to know the ideal quantity in order to determine the optimal cap.” So I don’t see how this is an argument for the claim that determining ideal price is simpler.
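To make the symmetry concrete, here’s a toy sketch with a made-up linear demand curve (the coefficients are arbitrary, purely for illustration):

```python
# Made-up linear demand curve for emissions: q = A - B * p.
A, B = 100.0, 2.0

def quantity_at_price(p):
    """Emissions demanded when each unit of emissions costs p (e.g. a tax of p)."""
    return A - B * p

def price_for_quantity(q):
    """Permit price that clears the market when the cap is set at q."""
    return (A - q) / B

# Say we decide the ideal quantity is 60 units.
cap = 60.0
tax = price_for_quantity(cap)          # the tax that yields those same 60 units
assert quantity_at_price(tax) == cap   # the two instruments coincide
print(tax)                             # 20.0
```

Either way round, hitting a quantity target with a tax, or a price target with a cap, requires knowing the demand curve; knowing a single number suffices only for the instrument that sets that number directly.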
The question is, which is more appropriately regarded as the dependent variable, ideal price or ideal quantity? Of course, neither is completely independent of the other. This is not an acyclic graph, unfortunately. Still, it does seem to me that our notion of the ideal price of emissions is (or at least should be) largely determined by a prior estimation of the quantity of sustainable emissions. It does not seem to me that our notion of ideal quantity should be fixed by some prior estimation of what the ideal price of emissions would be. How would we even come up with such an estimate?
If it were something where giving off more than X emissions would cause a runaway greenhouse effect and anything less than that is okay, then you’d cap and trade. But it’s generally not like that.
My understanding is that climate change does involve tipping points. Of course, it doesn’t follow that “anything less than that is okay”, but I don’t see why cap and trade advocates would have to be committed to that claim. When individuals make decisions about carbon emissions under either the tax or the cap system, I don’t see their incentives as being significantly different. After all, if an individual (or individual firm) does not use up all of its credits, it can sell them. So insofar as you think taxes will disincentivize individual emissions linearly down to zero, cap and trade would have the same effect.
The difference arises in the aggregate. A capping system allows you to be sensitive to tipping points in a way that a tax does not, at least not without performing the extremely complex task of figuring out what tax rate will ensure that we do not cross the point while at the same time avoiding unnecessary inefficiencies.
I should also note that capping and taxing are not mutually exclusive, though the hope of both being passed in the current American political climate is laughable.
True, but that would also mean that anything that does work would have a cost far higher than we are willing to pay. The beauty of taxing it is that it makes us decrease CO2 emissions conditional on it being worthwhile.
Given that humans irrationally discount future goods (not to mention that a large number of them have false beliefs about climate change), I don’t think willingness to pay is a useful proxy for genuine worth in this domain. (Of course your point would still hold if the price elasticity of emissions were actually zero, which is why I specified effectively zero. I meant to suggest that price changes do have some effect on consumption but that the effect is negligible when we consider feasible price changes).
My understanding is that climate change does involve tipping points.
But how sure are you of where they are? How sure are you of how much others will pollute? If you’re not very sure, you’re looking at a linearly increasing probability of hitting a tipping point as the amount of emissions increases.
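For instance, in a toy model where the unknown threshold $T$ is uniformly distributed over some range $[a, b]$, total emissions $E$ within that range give

$$P(\text{tipping}) = P(T \le E) = \frac{E - a}{b - a},$$

which is linear in $E$. Under that kind of uncertainty every marginal ton carries the same expected tipping risk, which is exactly the sort of constant marginal cost a flat tax prices correctly.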
I should also note that capping and taxing are not mutually exclusive, though the hope of both being passed in the current American political climate is laughable.
They do interface a bit strangely. You’d end up with a linear cost (the tax) until you run into the cap; then trading takes over and nothing more gets emitted.
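Roughly, with a tax rate $t$ and a cap $\bar{Q}$, the marginal cost of the $q$-th unit of emissions would look like

$$MC(q) = \begin{cases} t, & q < \bar{Q}, \\ t + p_{\text{permit}}, & q = \bar{Q}, \end{cases}$$

where $p_{\text{permit}}$ is zero as long as demand at price $t$ stays below the cap, and otherwise rises to whatever level chokes aggregate demand off at exactly $\bar{Q}$.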
I thought about this a bit more and came up with another idea. You could sell the emission rights, and change the price as you go. This would allow you to match it to the actual costs.
You’d have to be careful not to effectively hand money to the first buyers who show up. You could offer someone a fraction of the revenue as an incentive to maximize revenue net of the cost curve; they’d estimate the final clearing price and sell at that price uniformly. You’d also have to be careful about bribery and such.
You could also probably use some kind of prediction market.
Given that humans irrationally discount future goods (not to mention that a large number of them have false beliefs about climate change), I don’t think willingness to pay is a useful proxy for genuine worth in this domain.
There are opportunity costs. If you can only get a 3% return on investment by reducing emissions, but you can get a 5% return on investments somewhere else, you’d be a fool to reduce emissions.
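To put rough numbers on it: a dollar compounding for 50 years grows to about $(1.05)^{50} \approx 11.47$ dollars at 5%, versus $(1.03)^{50} \approx 4.38$ dollars at 3%, so taking the better investment leaves you with roughly 2.6 times as much to spend later, on emission reductions or anything else. (The 50-year horizon is just an illustrative choice.)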
This does bring up the question of how best to encourage investment. The obvious method is to subsidize it. The government could also take a more direct approach and get rid of the deficit; once it paid back all its loans, it could actually start investing. This does have a problem, though: once the US has significant investments, whoever controls where they’re invested will be very, very powerful.
We could also legalize long-term investments. Just let a few people set up trusts, wait a few generations, and they will have lots of money they’re investing.
Also, while I personally disagree with the idea of time discounting, it’s not strictly irrational. Caring less about the future is a perfectly valid utility function. Even hyperbolic discounting is. It’s just not the same utility function at every moment in time.
Of course your point would still hold if the price elasticity of emissions were actually zero, which is why I specified effectively zero.
If it were actually zero, making it illegal wouldn’t work, because the cost of going to jail wouldn’t be enough to dissuade people from using oil. If the effect is just negligible, and we have to multiply the price many times over to get the necessary change, it’s probably not worth it. If we weren’t planning on this from the beginning, then it’s almost certainly not worth it.
Also, while I personally disagree with the idea of time discounting, it’s not strictly irrational. Caring less about the future is a perfectly valid utility function. Even hyperbolic discounting is. It’s just not the same utility function at every moment in time.
Hyperbolic discounting leads to preference reversal, which makes the discounter vulnerable to money-pumping. That’s usually taken as a symptom of irrationality around here. Synchronic inconsistency is not necessary for irrationality; diachronic inconsistency works too.
Two different agents can hold two different utility functions. It’s not irrational for me to value different things than you, and it’s similarly not irrational for past!me to value different things than future!me.
I think it’s a mistake to always treat distinct temporal slices of the same person as different agents, since agency is tied up with decision making and decision making is a temporally extended process. I presume you regard intransitive preferences as irrational, but why? The usual rationale is that they turn you into a money pump, but since any realistic money-pumping scenario will be temporally extended, it’s unclear why this is evidence for irrationality on your view. If an arbitrageur can make money by engaging in a sequence of trades, each with a different agent, why should any one of those agents be convicted of irrationality?
Anyway, the problem with hyperbolic discounting is not just that the agent’s utility function changes with time. The preference switches are implicit in the agent’s current utility function; they are predictable. As a self-aware hyperbolic discounter, I know right now that I will be willing to make deals in the future that will undo deals I make now and cost me some additional money, and that this condition will persist unless I self-modify, allowing my adversary to pump an arbitrarily large amount of money out of me (or out of my future selves, if you prefer). I will sign a contract right now pledging to pay you $55 next Friday in return for $100 the following Saturday, even though I know right now that when Friday comes around I will be willing to sign a contract paying you $105 on Saturday in exchange for $50 immediately.
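To check that the arithmetic really does pump money, here’s a toy sketch (the dollar amounts come from the example above; the discount rate and the assumption that Friday is four days away are made up):

```python
# Hyperbolic discounting: $x at a delay of t days is worth x / (1 + K*t) now.
K = 2.0  # made-up discount rate

def value(amount, delay_days):
    return amount / (1.0 + K * delay_days)

# Contract 1, evaluated today (suppose Friday is 4 days away, Saturday 5):
# pay $55 on Friday, receive $100 on Saturday.
c1_today = value(-55, 4) + value(100, 5)

# Contract 2, evaluated today: receive $50 on Friday, pay $105 on Saturday.
c2_today = value(50, 4) + value(-105, 5)

# Contract 2, re-evaluated on Friday itself (the delays shrink to 0 and 1).
c2_friday = value(50, 0) + value(-105, 1)

print(f"contract 1, today:  {c1_today:+.2f}")   # +2.98 -> sign it
print(f"contract 2, today:  {c2_today:+.2f}")   # -3.99 -> refuse it
print(f"contract 2, Friday: {c2_friday:+.2f}")  # +15.00 -> sign it after all

# Net cash flow from signing each contract when it looks good:
# Friday: -55 + 50 = -5; Saturday: +100 - 105 = -5. Ten dollars pumped.
```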
since agency is tied up with decision making and decision making is a temporally extended process.
You can make the decision to consider the options and let future!you make a better-informed decision.
I presume you regard intransitive preferences as irrational, but why?
If you prefer paper to rock, scissors to paper, and rock to scissors, that can be taken advantage of on the spot, without your preferences ever having to change. If your preferences merely change over time, you don’t have intransitive preferences. You do have to take into account that an action can change your preferences, and that future!you might not do what you want, as with the murder pill.
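Spelled out as the standard trade cycle (a toy sketch; the one-cent fee per swap is made up):

```python
# Cyclic preferences: paper > rock, scissors > paper, rock > scissors.
# If each strict preference is worth at least a cent to you, I can walk
# you around the cycle and collect a cent per trade.
prefers = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

holding, cents = "rock", 0
for _ in range(3):              # one full loop around the cycle
    cents -= 1                  # you pay a cent to swap up to the item you prefer
    holding = prefers[holding]

print(holding, cents)           # back to "rock", three cents poorer
```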
The preference switches are implicit in the agent’s current utility function; they are predictable.
They are predictable, but they are not part of the agent’s current utility function. It’s no more irrational than the idea of agents caring more about themselves than each other. An adversary could take advantage of this by setting up a prisoner’s dilemma, just as past!you and future!you could be taken advantage of with a prisoner’s dilemma. You might use a decision theory that avoids that, but that’s not the same as changing the utility function.
It’s no more irrational than the idea of agents caring more about themselves than each other.
I don’t get the equating of future selves with other agents. Rationality is about winning, but it’s not about your present self winning, it’s about your future selves winning. When you’re engaged in rational decision-making, you’re playing for your future selves. I love mangoes right now, but if I knew for sure that one-minute-in-the-future-me was going to suddenly develop a deep aversion to mangoes, it would be irrational for me to set out to acquire mangoes right now. It would be irrational for me to say “Who cares about that guy’s utility function?”
I don’t get the equating of past and future selves with each other.
Rationality is about winning according to some given utility function. Claiming that you have to make everyone who happens to be connected along some world line win is no less arbitrary than claiming that you have to make everyone contained in state boundaries win.
Future!you tends to agree with present!you’s values far more often than your closest other allies. As such, an idea of personal identity tends to be useful. It’s not like it’s some fundamental thing that makes you all the same person, though.
I love mangoes right now, but if I knew for sure that one-minute-in-the-future-me was going to suddenly develop a deep aversion to mangoes, it would be irrational for me to set out to acquire mangoes right now.
Present!mangoes are instrumentally useful to make present!you happy. Future!mangoes don’t make present!you or future!you happy, and are therefore not instrumentally helpful. If you thought it was intrinsically valuable that future!you has mangoes, then you would get future!you mangoes regardless of what he thought.
Rationality is about winning according to some given utility function.
Pretty much any sequence of outcomes can be construed as wins according to some utility function. But rationality is not that trivial. If you accuse me of irrationality, I shouldn’t be able to respond by saying “Well, my actions look irrational according to my utility function, but you should be evaluating them using Steve’s utility function, not mine.”
Claiming that you have to make everyone who happens to be connected along some world line win is no less arbitrary than claiming that you have to make everyone contained in state boundaries win.
There are a number of physical differences between time and space, and these differences are very relevant to the way organisms have evolved. In particular, they are relevant to the evolution of agency and decision-making. Our tendency to regard all spatially separated organisms as others but certain temporally separated organisms as ourselves is not an arbitrary quirk; it is the consequence of important and fundamental differences between space and time, such as the temporal (but not spatial) asymmetry of causal connections. When we’re talking about decision-making, it is not arbitrary to treat space and time differently.
If everyone who happens to be connected to me-now along a world line didn’t exist, I would not be an agent. There is no sense in which a momentary self (if such an entity is even coherent) would be a decision maker, if it merely appeared and then disappeared instantaneously. On the other hand, if everyone else within my state boundaries disappeared, I would still be an agent. So there is a principled distinction here. Agency (and consequently decision-making) is intimately tied up with the existence of future “selves”. It is not similarly dependent on the existence of spatially separated “selves”.
Future!you tends to agree with present!you’s values far more often than your closest other allies.
Talking of different time slices as distinct selves is a useful heuristic for many purposes, but you’re elevating it to something more fundamental, and that’s a mistake. Every single mental process associated with the generation of selfhood is a temporally extended process. There is no such thing as a genuinely instantaneous self. So when you’re talking about future!me and present!me, you’re already talking about extended segments of world-lines (or world-tubes) rather than points. It is not a difference in kind to talk of a slightly longer segment as a single “self”, one that encompasses both future!me and present!me.
If you accuse me of irrationality, I shouldn’t be able to respond by saying “Well, my actions look irrational according to my utility function, but you should be evaluating them using Steve’s utility function, not mine.”
No, but you should be able to respond, “Well, my actions look irrational according to Steve’s utility function, but you should be evaluating them using my utility function, not his,” or, similarly, “Well, my actions look irrational according to future!me’s utility function, but you should be evaluating them using present!me’s utility function, not his.”
Is it future!me’s or future!my? Somehow, my English classes never went into much depth about characterization tags.
Your decisions aren’t totally instantaneous. You depend on at least a little of future!you and past!you before you can really be thought of as a rational agent at all, but that doesn’t mean you should think of future!you from an hour later as exactly the same person. It especially doesn’t mean that the you who wakes up the next morning is the same as the you who went to sleep. Those two are only vaguely connected.
Well, “only vaguely” is a massive understatement. There’s a helluva lot of mutual information between me tomorrow and me today, much, much more than between me today and you today.
Yeah, but there’s no continuity.
What do you mean? The differences between me now and me in epsilon seconds are of order epsilon, aren’t they?