I think it’s really stupid that people have to work stupid jobs to do actually valuable things. I also feel somewhat bad about making a critique, as I don’t have money to fund you even if you satisfactorily responded to it. Nonetheless, I feel someone should respond in a little more detail, in the hope of setting a precedent for the standards that need to be met before requests for funding are made.
If such a theory is important to Friendliness, Eliezer or Marcello should be alerted. If your approach is important to Friendliness, Eliezer or Marcello should convince SIAI to fund you. If neither Eliezer nor Marcello deems your approach worth funding, then to many people that is pretty strong evidence against the merit of your approach. To convince those people you would have to show either where Eliezer or Marcello are wrong in their critique, or where they are likely to go wrong in general when considering potential approaches to Friendliness. Have you tried talking to Eliezer or Marcello? If not, can you provide evidence that they are wrong to deem your approach to Friendliness not worth discussing?
To convince those who are interested in Friendliness but do not think Marcello or Eliezer are likely to recognize correct approaches when they see them, you should provide more evidence that your potentially interesting approach will yield solid results and verifiable progress.
I’ve just begun to consider whether a similar method might illuminate problems like self-enhancement, utility function discovery, and utility function renormalization (for concreteness, I plan to work with decision field theory).
Just begun to consider? This doesn’t inspire confidence.
I have ideas about how CEV should work, and about what the true ontology is, and about the adjustments that the latter might require. These ideas are tentative, and open to correction, and the objective is to find out the facts, not just to insist on an opinion.
Philosophical intuition is a really bad basis for allocating research funds. I have some ideas about ontology too, and I think they’re very clever. They are related to the ideas of Paul Almond, who is a very creative thinker and has a great intuition for Occamian reasoning. My metaphysical intuition finds it rather unlikely that string theory would be important for volition extrapolation: in an ensemble universe of hierarchical ontology, computation-specific physical laws are not as important as resolving confusion about things like self-representation and determining languages/prefixes for UTMs: problems that I, and more importantly folk at the Singularity Institute, are working on (and problems that people from the decision theory workshop sorta flirt with now and then).
I have ideas about how CEV should work, and about what the true ontology is, and about the adjustments that the latter might require.
Have you read and understood Tegmark’s papers about the MUH? Have you read and understood Paul Almond? I’m skeptical that anyone could have ideas about what the true ontology is that bear on CEV. CEV is an impossible problem; impossible in the Yudkowskyan sense of the word, but still impossible. I’ve had many ideas about how to attack it. They don’t work. It’s hard. It’s so hard it’s impossible. When someone says that they have ideas about how CEV should work, I think, ‘this person just doesn’t understand how impossible CEV is’. Do you have evidence that my judgment is wrong enough that others should fund you?
I can’t help but think that focusing on string theory is going down a wrong path, and I’m much more tempted to think that you haven’t found the most important domains for research. Which isn’t really fair, ’cuz it’s damn near impossible to figure it out yourself, and SIAI folk aren’t exactly open about AGI-related research, but there you go. Do you trust that your metaphysical intuition is better than everyone else’s? Do you really think we should trust that? Unless you can make exceptionally strong arguments for doing so, asking for funding is premature.
I do wish more people were working more directly on Friendliness, especially as Eliezer is writing a book right now. But I don’t think anyone can. I’m not sure Eliezer or Marcello can, either, because it’s an impossible problem. But with very, very few exceptions, I don’t think anyone else is anywhere close.
Added: When I get replies like the one I made above, it makes me really depressed, sometimes for days. Even if I think they’re off-base and ill-founded, it feels like someone’s personally attacking me for no good reason. I’m not really sure how to soften the blow… but I thought that such a comment needed to be made. I’m sorry.
Added again: Instead of just being sorry I decided to try to be a little more productive. Hopefully my new post will be at least a little helpful.
I also feel somewhat bad about making a critique, as I don’t have money to fund you even if you satisfactorily responded to it.
But someone else might!
Have you tried talking to Eliezer or Marcello?
I don’t think I’ve ever talked with Marcello. Eliezer I’ve talked with many times but not so much in recent years. My relationship to existing Friendliness theory is that I agree with the overall strategy proposed; for the unsolved subproblems, I have placeholder ideas which I periodically revise; but I’m quite sure that significant portions of it will have to be grounded in a fundamental, subcomputational ontology, because substrate matters for consciousness, and even if an FAI is unconscious, its concept of consciousness needs to be correct.
Talking to Eliezer about these issues is something I save for the future, e.g. after the paper-in-progress is written, because only then will everything about my position be set out clearly and rigorously. But for now, neither of us has a set of ideas in the public domain which is sufficiently exact for a significant exchange to occur.
Just begun to consider? This doesn’t inspire confidence.
I figured that to make this sales pitch, I had better have a line on the computational side of CEV and not just the ontology. Also, the approach of economic doomsday has made me think as hard and fast as possible, since I may not get another chance for some time, and that was the best distillation of my existing ideas I could come up with. CEV involves reflection and computationally difficult tasks, and Mulmuley’s “flip” is a strategy for dealing with this in the context of P vs NP. It is definitely just a placeholder idea, but it has enough relevant complexity that it should be a good starting point if approached in a spirit of critical engagement. A good starting point for me, that is—at this stage I wouldn’t say that everyone else, or even anyone else, should bother with this perspective. To mention it is simply to say that I have a line of thought to pursue.
My metaphysical intuition finds it rather unlikely that string theory would be important for volition extrapolation … I’m skeptical that anyone could have ideas about what the true ontology is that bear on CEV.
Fundamental physical ontology matters for the ontology of consciousness, because exact states of consciousness can’t be coarse-grained physical states, unless you want to be a property dualist with a one-to-many mapping. That is an assertion; it has to be backed up with an argument, which I won’t repeat right away, but I state it so you can see the relevance. The unconscious information processing of the brain may be understood in functional and coarse-grained terms, but substance (in the most abstracted sense, the “being” of a “thing”), not just causal structure, must matter for conscious states themselves. This is why I take seriously the idea that there is a “Cartesian theater” and that physically it will be something very concrete; see my remark there about entangled excitons. To further understand how this single physical object can be identified with the conscious mind, we would need to understand its exact microphysical constitution, and for that we need string theory or some other fundamental theory: that’s the only place where you’ll find out what an electron actually is. (Then you would need to map the physically described states of this object onto the conscious states.)
The more computer-sciencey issues you mention, like self-representation and description-length epistemology, are also part of the problem, but they will have to be grounded in a deeper ontology …
Have you read and understood Tegmark’s papers about the MUH? Have you read and understood Paul Almond?
… than you can find in Tegmark or Almond. Reifying mathematical objects is not good enough, and neither is a systems hierarchy approach. Ironically, these two thinkers exemplify the two poles of the old opposition between property and substance, universal and particular, mathematics and physics (etc.), which is precisely the sort of perennial ontological issue that will need to be dealt with.
When someone says that they have ideas about how CEV should work, I think, ‘this person just doesn’t understand how impossible CEV is’.
It’s the “functionalist” or “computer-science” part of CEV which I think should be solvable just through hard work and systematic labor. For example, inferring the schematic human decision procedure from data about the brain. That’s an exercise in using one finite-state machine (the AI) to infer a particular property of another class of finite-state machines (human brains). That shouldn’t require ontological innovation, just advanced mathematics.
Finding the right ontological grounding of everything is a harder problem from the perspective of method, because it’s not a problem that we already know how to solve, but it should also be a simpler (less laborious) problem, because we have so much of the “data” already: conscious experience is right there in front of us at every moment, and from science we have endless third-person data on physics and neuroscience. So getting this part right is going to be something like finding the right perspective on a few very fundamental facts.
I therefore agree that CEV is difficult, but perhaps I analyse the difficulty in a different way to you.
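To make the finite-state-machine framing above a bit more concrete, here is a toy sketch in Python. It only reconstructs the transition table of a small, fully observable deterministic machine from recorded traces and then queries a trivial property of the result; every name and trace in it is invented for illustration, and nothing about it resembles actual inference from brain data.

```python
# Toy sketch: reconstruct the transition table of a small deterministic
# finite-state machine from observed (state, input, next_state) triples,
# then query a simple property of the inferred machine.

def infer_transitions(traces):
    """Build a transition table {(state, symbol): next_state} from observed triples."""
    table = {}
    for state, symbol, next_state in traces:
        if table.get((state, symbol), next_state) != next_state:
            raise ValueError("observations are inconsistent with a deterministic machine")
        table[(state, symbol)] = next_state
    return table

def reachable_states(table, start):
    """Example property query: which states can be reached from `start`?"""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for (s, _symbol), nxt in table.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Hypothetical observations of some hidden machine.
traces = [("idle", "ping", "active"),
          ("active", "ping", "active"),
          ("active", "stop", "idle")]

table = infer_transitions(traces)
print(reachable_states(table, "idle"))  # both states turn out to be reachable
```

The real task would differ from this toy in scale, and in that the traces are noisy, indirect and incomplete, but the shape of the exercise is the same: reconstruct structure from observed behaviour, then extract the property you care about.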
I’m not really sure how to soften the blow… but I thought that such a comment needed to be made. I’m sorry.
It didn’t bother me at all. I have far more pressing matters to worry about in my physical life. For some reason I found it grimly amusing to see the post being voted down, down, down… Didn’t Bill Gates say, “640 karma ought to be enough for anybody”? Something like that. Anyway, you did me a favor by replying at such length.
My metaphysical intuition finds it rather unlikely that string theory would be important for volition extrapolation: in an ensemble universe of hierarchical ontology, computation-specific physical laws are not as important as resolving confusion about things like self-representation and determining languages/prefixes for UTMs: problems that I, and more importantly folk at the Singularity Institute, are working on (and problems that people from the decision theory workshop sorta flirt with now and then).
Flesch-Kincaid grade 37. Congratulations, I don’t think many people regularly pull that off without deliberate intent.

Wow. Most people can’t pull that off even when trying!
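For anyone wondering how a single comment earns a readability grade of 37: the Flesch-Kincaid grade level is roughly 0.39 × (words per sentence) + 11.8 × (syllables per word) - 15.59, so one seventy-odd-word sentence packed with polysyllabic words does it almost single-handedly. A rough sketch, using a deliberately crude vowel-run syllable counter, so treat the exact figure loosely:

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per run of consecutive vowels (including y).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

sentence = (
    "My metaphysical intuition finds it rather unlikely that string theory would be "
    "important for volition extrapolation: in an ensemble universe of hierarchical "
    "ontology, computation-specific physical laws are not as important as resolving "
    "confusion about things like self-representation and determining languages/prefixes "
    "for UTMs: problems that I, and more importantly folk at the Singularity Institute, "
    "are working on (and problems that people from the decision theory workshop sorta "
    "flirt with now and then)."
)
print(round(fk_grade(sentence), 1))  # about 37 with this crude syllable heuristic
```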
‘Actually valuable things’ are, pretty much by definition, things that someone values enough to pay you to do (even if that someone is yourself).
Perhaps, but only in an extended sense of the word. What if many people are willing to pay you a lot, but those people don’t [yet] exist? Much important number theory that underlies cryptography (and hence our modern economic institutions) was originally developed by mathematicians at a time when nobody valued that particular product very much.
Likewise, how much are people willing to pay for FAI right now? After the advent of FAI, how much will they say we should have valued those efforts? Equally, right before Clippy destroys humanity, how much will the world’s inhabitants regret not having funded work to prevent that particular event?
What if many people are willing to pay you a lot, but those people don’t exist?
I’m a little confused by this—I’m not sure what to make of the word ‘many’ when applied to non-existent people. Do you mean potential future people?
Much important number theory that underlies cryptography (and hence our modern economic institutions) was originally developed by mathematicians at a time when nobody valued that particular product very much.
Nonetheless, it was developed, so somebody valued it enough. There may of course be things which were never developed due to lack of funding and which would be very valuable today, but equally there are many things into which lots of funding has been sunk to no useful end. If we could reliably tell these apart in advance we would no doubt make significantly faster progress.
What if many people are willing to pay you a lot, but those people don’t exist?
I’m a little confused by this—I’m not sure what to make of the word ‘many’ when applied to non-existent people. Do you mean potential future people?
Yes, I meant “do not exist yet”.
If we could reliably tell these apart in advance we would no doubt make significantly faster progress.
We have heuristics, and they help. We now know that funding basic research which investigates the nature of the world but doesn’t provide any immediate tangible benefit is worthwhile in the average case. I believe this principle is one of the many factors behind the accelerating pace of technological development: as fundamental science funding has increased, we have discovered, after the fact, many important things we ended up needing to know, with much less lag time than decades or centuries ago.
I certainly can’t attribute all that to funding research that has no immediate application, but that heuristic has increased our rate of advancement.
Economically speaking, yeah; but I was using ‘value’ in a more CEV-type sense. Even if Mitchell’s ideas are totally confused, with a really low probability of working, I think our extrapolated volition would rather not have FAI researchers getting paid less than store clerks, unless they were actively damaging the meme. (Not my downvote.)
I pretty much agreed with the rest of your post. If you believe you can create something that will be valuable (where that something could be knowledge) but you lack the capital to invest in creating it, then you need to raise that capital somehow. Convincing someone to supply the capital is one route, self-funding another. If you can’t make either of these work then you should at least consider the possibility that you are wrong about the future value of what you believe you can create.
As a side note, it is an economic fallacy to suppose that salaries should directly reflect the value created by the worker. The price of labour, like any other price, is determined by supply and demand.
As a side note, it is an economic fallacy to suppose that salaries should directly reflect the value created by the worker. The price of labour, like any other price, is determined by supply and demand.
The word ‘should’ would need to be replaced (or added to) for the supposition to be fallacious. The (rejection of) ‘should’ does not follow from the ‘is’ in the next sentence without including an additional normative premise.
True, perhaps I should have said ‘it is an economic fallacy to suppose that salaries will directly reflect the value created by the worker.’
Assuming our extrapolated volitions understood economics, however, they would have no reason to care about the relative salaries of FAI researchers and store clerks. Their only concern would be whether FAI researchers were undersupplied at the market price.
As a side note, it is an economic fallacy to suppose that salaries should directly reflect the value created by the worker. The price of labour, like any other price, is determined by supply and demand.
Interesting. If value isn’t determined by supply and demand, what is value? I don’t remember such subtle distinctions from my AP Econ classes.
Interesting. If value isn’t determined by supply and demand, what is value?
I was distinguishing between the price paid for the product a worker produces and the price paid for the worker’s labour. These are both determined by supply and demand, but they are set in separate markets. A factory with large quantities of high-tech equipment may be one of very few factories in the world capable of producing a product. This product may be in high demand but limited in supply due to the capital-intensive nature of its production. The sophisticated machinery may only require low-skilled labour, however, and the factory may be located in an area where such labour is amply supplied. In such a situation the price of labour (wages) will be low but the price of the product will be high. Think iPads manufactured in China.

Alternatively, in the case of FAI research, demand may be low due to lack of interest or awareness, and also perhaps because it is not a very scalable problem (would Eliezer prefer an army of 10,000 random grad students or 10 geniuses?). At the same time supply may be high, since a relatively large number of people think it would be an interesting or important research project. This may lead to lower wages for an FAI researcher than for a store clerk if the economics of store clerks push their wages up, even if the FAI researchers ultimately produce great value.

It is also true, of course, that price (exchange value) and value are not the same thing. If they were we would have no trade. Value is subjective. Trade occurs when both parties ascribe higher utility to the post-trade state of the world than to the pre-trade state of the world. When money is the medium of exchange, the price reflects that one person values the money more than the item traded and the other values the item more than the money.
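To put toy numbers on the point that the product market and the labour market clear separately, here is a minimal sketch with invented linear supply and demand curves (every curve and number is made up purely for illustration): scarce supply of the product keeps its price high, while an abundant local supply of suitable labour keeps the wage low.

```python
def equilibrium_price(a, b, c, d):
    """Clear a market with linear demand Qd = a - b*P and linear supply Qs = c + d*P."""
    return (a - c) / (b + d)

# Product market: strong demand, scarce supply (few factories can make the thing).
product_price = equilibrium_price(a=1000, b=1.0, c=10, d=1.0)

# Labour market at the same factory: modest demand for low-skilled labour,
# abundant local supply of workers.
wage = equilibrium_price(a=200, b=1.0, c=150, d=1.0)

print(product_price)  # 495.0 -- the product is expensive
print(wage)           # 25.0  -- the wage is low
```

In reality the two markets are linked (the firm’s demand for labour is derived from the product’s price), but the sketch illustrates that a high product price need not translate into high wages when the pool of suitable labour is large.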
Enlightening, thank you. Do you think that inability to obviously and intuitively make such economic distinctions is likely to hurt my rationality? (That is, would it be better to read a computer programming textbook or a microeconomics textbook if I wanted to be a master rationalist?)
It’s a little difficult for me to give a good answer to this. I feel I reached the point of diminishing returns some time ago with computer programming textbooks (I’m a professional programmer) and have only relatively recently taught myself some economics. Both are valuable to a rationalist, but I’m not sure which has higher value. I think learning some basic economics may have more instrumental value than learning programming, however, if you’re not going to make a living as a programmer.
Convincing someone to supply the capital is one route, self-funding another. If you can’t make either of these work then you should at least consider the possibility that you are wrong about the future value of what you believe you can create.
I would never discourage consideration of the possibility, but I note that quite often the conclusion will be ‘I have not found a way to solve the cooperation problem.’