Let’s make a deal
At the start of 2010, I resolved to focus as much as possible on singularity-relevant issues. That resolution has produced three ongoing projects:
work on a paper
study of string theory
investigation of academic options
As I put it the other day, the paper is about “CEV, adapted to whatever the true ontology is”. I have ideas about how CEV should work, about what the true ontology is, and about the adjustments that the latter might require. These ideas are tentative and open to correction; the objective is to find out the facts, not just to insist on an opinion. Indeed, I would be open to hearing that I ought to be working on something else, if I want to attain maximum relevance to the AI era. But for now, I have my plan, and I take it seriously as a blueprint for what I should be doing.
The relevance of string theory might seem questionable. But it matters for physical ontology and for the epistemology of physics [ETA: which matters for general epistemology and hence for AGI]. String theory is also a crossroads for many topics central to pure mathematics, such as algebraic geometry, and their techniques are relevant to many other fields, even discrete ones like computer science. In complexity theory, there is a self-referential barrier (the “natural proofs” barrier) to proving that P is distinct from NP. There is a deep proposal, Mulmuley’s geometric complexity theory, to overcome it by transposing the problem into the domain of algebraic geometry, and I’ve just begun to consider whether a similar method might illuminate problems like self-enhancement, utility function discovery, and utility function renormalization (for concreteness, I plan to work with decision field theory). Also, if I can speak string, maybe I can recruit some of those big brains to the task of FAI.
“Investigation of academic options” should be self-explanatory. A university is one of the few places where you might be able to work full-time on matters like these. Unfortunately, this outcome continues to elude me. So while I set about whipping up a stew of private microloans and casual work so as to keep a roof over my head, it’s time for me to try the Internet option as well.
I find that life costs me AUD$1000/month (AUD being Australian dollars). I’d do better with more, but that’s my minimum budget, the lower bound below which bad things will happen. So that’s also the conversion rate from “money” to “free time”: each AUD$1000 buys me a month in which to work on these projects.
I figure that there are three basic forms of cash transaction: gifts, payments, and loans. A gift is unconditional. A payment is traded for services rendered. A loan is a temporary increase in a person’s capital that has to be returned. These categories are not entirely distinct: for example, a payment refunded (because the service wasn’t performed) ends up having functioned as a loan.
I am interested in all three of these things. The brittle arrangement which allows me to speak to you this way does not presently extend to me owning a laptop or having easy access to Skype, but I do have a phone, so the adventurous can call me on +61 0406 979788. (I’m in the eastern Australian timezone.) My email is mporter at gmail.com, and I have a Paypal account under that address.
I think it’s really stupid that people have to work stupid jobs in order to do actually valuable things. I also feel somewhat bad about making a critique, as I don’t have money to fund you even if you responded to it satisfactorily. Nonetheless, I feel someone should respond in a little more detail, so as to set a precedent for the standards that need to be met before requests for funding are made.
If such theory is important to Friendliness, Eliezer or Marcello should be alerted. If your approach is important to Friendliness, Eliezer or Marcello should convince SIAI to fund you. If neither Eliezer nor Marcello deems your approach worth funding, then to many people that is pretty strong evidence against its merit. To convince those people, you would have to show either where Eliezer or Marcello are wrong in their critique, or where they are likely to go wrong in general when considering potential approaches to Friendliness. Have you tried talking to Eliezer or Marcello? If not, can you provide evidence that they are wrong in deeming your approach to Friendliness not worth talking about?
For those who are interested in Friendliness but do not think Marcello or Eliezer are likely to recognize correct approaches when they see them: you should provide more evidence that your potentially interesting approach will produce solid results and verifiable progress.
Just begun to consider? This doesn’t inspire confidence.
Philosophical intuition is a really bad determiner for research funds allocation. I have some ideas about ontology too, and I think they’re very clever. They are related to the ideas of Paul Almond, who is a very creative thinker with a great intuition for Occamian reasoning. My metaphysical intuition finds it rather unlikely that string theory would be important for volition extrapolation: in an ensemble universe of hierarchical ontology, computation-specific physical laws are not as important as resolving confusion about things like self-representation and determining languages/prefixes for universal Turing machines. These are problems that I, and more importantly folk at the Singularity Institute, are working on (and problems that people from the decision theory workshop sorta flirt with now and then).
Have you read and understood Tegmark’s papers about the MUH? Have you read and understood Paul Almond? I’m skeptical that anyone could have ideas about what the true ontology is that bear on CEV. CEV is an impossible problem; impossible in the Yudkowskyan sense of the word, but still impossible. I’ve had many ideas about how to attack it. They don’t work. It’s hard. It’s so hard it’s impossible. When someone says that they have ideas about how CEV should work, I think, ‘this person just doesn’t understand how impossible CEV is’. Do you have evidence that my judgment is wrong enough that others should fund you?
I can’t help but think that focusing on string theory is going down a wrong path, and I’m much more tempted to think that you haven’t found the most important domains for research. Which isn’t really fair, ’cuz it’s damn near impossible to figure it out yourself, and SIAI folk aren’t exactly open about AGI-related research, but there you go. Do you trust that your metaphysical intuition is better than everyone else’s? Do you really think we should trust that? Unless you can make exceptionally strong arguments for doing so, asking for funding is premature.
I do wish more people were working more directly on Friendliness, especially as Eliezer is writing a book right now. But I don’t think anyone can. I’m not sure Eliezer or Marcello can, either, because it’s an impossible problem. But with very, very few exceptions, I don’t think anyone else is anywhere close.
Added: When I get replies like the one I made above, it makes me really depressed, sometimes for days. Even if I think they’re off-base and ill-founded, it feels like someone’s personally attacking me for no good reason. I’m not really sure how to soften the blow… but I thought that such a comment needed to be made. I’m sorry.
Added again: Instead of just being sorry I decided to try to be a little more productive. Hopefully my new post will be at least a little helpful.
But someone else might!
I don’t think I’ve ever talked with Marcello. Eliezer I’ve talked with many times but not so much in recent years. My relationship to existing Friendliness theory is that I agree with the overall strategy proposed; for the unsolved subproblems, I have placeholder ideas which I periodically revise; but I’m quite sure that significant portions of it will have to be grounded in a fundamental, subcomputational ontology, because substrate matters for consciousness, and even if an FAI is unconscious, its concept of consciousness needs to be correct.
Talking to Eliezer about these issues is something I save for the future, e.g. after the paper-in-progress is written, because only then will everything about my position be set out clearly and rigorously. But for now, neither of us has a set of ideas in the public domain which is sufficiently exact for a significant exchange to occur.
I figured that to make this sales pitch, I had better have a line on the computational side of CEV and not just the ontology. Also, the approach of economic doomsday has made me think as hard and fast as possible, since I may not get another chance for some time, and that was the best distillation of my existing ideas I could come up with. CEV involves reflection and computationally difficult tasks, and Mulmuley’s “flip” is a strategy for dealing with this in the context of P vs NP. It is definitely just a placeholder idea, but it has enough relevant complexity that it should be a good starting point if approached in a spirit of critical engagement. A good starting point for me, that is—at this stage I wouldn’t say that everyone else, or even anyone else, should bother with this perspective. To mention it is simply to say that I have a line of thought to pursue.
Fundamental physical ontology matters for the ontology of consciousness, because exact states of consciousness can’t be coarse-grained physical states, unless you want to be a property dualist with a one-to-many mapping. That is an assertion; it has to be backed up with an argument, which I won’t repeat right away, but I state it so you can see the relevance. The unconscious information processing of the brain may be understood in functional and coarse-grained terms, but substance (in the most abstracted sense—the “being” of a “thing”), not just causal structure, must matter for conscious states themselves. This is why I take seriously the idea that there is a “Cartesian theater” and that physically it will be something very concrete—see my remark there about entangled excitons. To further understand how this single physical object can be identified with the conscious mind, we would need to understand its exact microphysical constitution, and for that we need string theory or some other fundamental theory—that’s the only place where you’ll find out what an electron actually is. (Then you would need to map the physically described states of this object onto the conscious states.)
The more computer-sciencey issues you mention, like self-representation and description-length epistemology, are also part of the problem, but they will have to be grounded in a deeper ontology …
… than you can find in Tegmark or Almond. Reifying mathematical objects is not good enough, and neither is a systems hierarchy approach. Ironically, these two thinkers exemplify the two poles of the old opposition between property and substance, universal and particular, mathematics and physics (etc), which is precisely the sort of perennial ontological issue that will need to be dealt with.
It’s the “functionalist” or “computer-science” part of CEV which I think should be solvable just through hard work and systematic labor. For example, inferring the schematic human decision procedure from data about the brain. That’s an exercise in using one finite-state machine (the AI) to infer a particular property of another class of finite-state machines (human brains). That shouldn’t require ontological innovation, just advanced mathematics.
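To make that concrete, here is a minimal, purely illustrative sketch of one machine inferring another: a brute-force learner that finds the smallest Moore machine (a finite-state machine with an output attached to each state) consistent with observed input/output traces. Everything here, function names and the toy parity system alike, is hypothetical; real brain-inference would obviously be nothing this simple.

```python
# Illustrative only: infer the smallest Moore machine consistent with
# observed input/output traces, by brute-force search. The traces stand
# in for behavioural data about the system being modelled.
from itertools import product

def consistent(trans, out, traces):
    """Check one candidate machine against every observed trace."""
    for inputs, outputs in traces:
        state = 0  # fixed start state
        for symbol, expected in zip(inputs, outputs):
            state = trans[(state, symbol)]
            if out[state] != expected:
                return False
    return True

def infer_moore_machine(traces, alphabet, output_values, max_states=4):
    """Return the smallest (n, transitions, outputs) reproducing all
    traces, or None if no machine with <= max_states states works."""
    for n in range(1, max_states + 1):
        keys = [(s, a) for s in range(n) for a in alphabet]
        for trans_values in product(range(n), repeat=len(keys)):
            trans = dict(zip(keys, trans_values))
            for out in product(output_values, repeat=n):
                if consistent(trans, out, traces):
                    return n, trans, out
    return None

# Toy data from an unknown system that outputs 1 exactly when it has
# seen an odd number of 'a' symbols (secretly a two-state parity machine).
traces = [
    ("a",  (1,)),
    ("aa", (1, 0)),
    ("ab", (1, 1)),
    ("ba", (0, 1)),
]
print(infer_moore_machine(traces, alphabet="ab", output_values=(0, 1)))
```

The point is only that “infer a property of another machine from its behaviour” is a well-posed mathematical task; scaling it up to brains is where the advanced mathematics comes in.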
Finding the right ontological grounding of everything is a harder problem from the perspective of method, because it’s not a problem that we already know how to solve, but it should also be a simpler (less laborious) problem, because we have so much of the “data” already—conscious experience is right there in front of us at every moment, and then from science we have endless third-person data on physics and neuroscience. So getting this part right is going to be something like finding the right perspective on a few very fundamental facts.
I therefore agree that CEV is difficult, but perhaps I analyse the difficulty in a different way to you.
It didn’t bother me at all. I have far more pressing matters to worry about in my physical life. For some reason I found it grimly amusing to see the post being voted down, down, down… Didn’t Bill Gates say, “640 karma ought to be enough for anybody”? Something like that. Anyway, you did me a favor by replying at such length.
Flesch-Kincaid grade 37. Congratulations; I don’t think many people regularly pull that off without deliberate intent.
Wow. Most people can’t pull that off even when trying!
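(For the curious, here is the standard Flesch-Kincaid grade formula behind numbers like that. This sketch is mine; the syllable counter is a crude vowel-group heuristic, so treat its output as only approximate.)

```python
# Flesch-Kincaid grade:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# Syllables are estimated by counting vowel groups, a rough heuristic.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text) or ["x"]
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / sentences
            + 11.8 * syllables / len(words) - 15.59)

print(round(fk_grade("Reaching extraordinarily high grades requires "
                     "interminable, polysyllabically overburdened "
                     "sentences containing innumerable clauses."), 1))
```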
‘Actually valuable things’ are, pretty much by definition, things that someone values enough to pay you to do (even if that someone is yourself).
Perhaps, but only in an extended sense of the word. What if many people are willing to pay you a lot, but those people don’t [yet] exist? Much important number theory that underlies cryptography (and hence our modern economic institutions) was originally developed by mathematicians at a time when nobody valued that particular product very much.
Likewise, how much are people willing to pay for FAI right now? After the advent of FAI, how much will they say we should have valued those efforts? Equally, right before Clippy destroys humanity, how much will the world’s inhabitants regret not having funded work to prevent that particular event?
EDIT: fixed typos and added a clarifying “yet”.
I’m a little confused by this—I’m not sure what to make of the word ‘many’ when applied to non-existent people. Do you mean potential future people?
Nonetheless, it was developed, so somebody valued it enough. There may of course be things which were never developed due to lack of funding and which would be very valuable today; equally, there are many things into which lots of funding has been sunk to no useful end. If we could reliably tell these apart in advance, we would no doubt make significantly faster progress.
Yes, I meant “do not exist yet”.
We have heuristics, and they help. We now know that funding basic research which investigates the nature of the world but provides no immediate tangible benefit is worthwhile in the average case. I believe this principle is one of the many factors accelerating technological development: as funding for fundamental science has increased, we have discovered after the fact many important things we ended up needing to know, with much less lag time than decades or centuries ago.
I certainly can’t attribute all that to funding research that has no immediate application, but that heuristic has increased our rate of advancement.
Economically speaking, yeah; but I was using ‘value’ in a more CEV type sense. Even if Mitchell’s ideas are totally confused with a really low probability of working, I think our extrapolated volition would rather not have FAI researchers getting paid less than store clerks, unless they were actively damaging the meme. (Not my downvote.)
I pretty much agreed with the rest of your post. If you believe you can create something that will be valuable (where that something could be knowledge) but you lack the capital to invest in creating that something then you need to raise that capital somehow. Convincing someone to supply the capital is one route, self-funding another. If you can’t make either of these work then you should at least consider the possibility that you are wrong about the future value of what you believe you can create.
As a side note, it is an economic fallacy to suppose that salaries should directly reflect the value created by the worker. The price of labour, like any other price, is determined by supply and demand.
The word ‘should’ would need to be replaced (or added to) for the supposition to be fallacious. The (rejection of) ‘should’ does not follow from the ‘is’ in the next sentence without including an additional normative premise.
True, perhaps I should have said ‘it is an economic fallacy to suppose that salaries will directly reflect the value created by the worker.’
Assuming our extrapolated volitions understood economics however they would have no reason to care about the relative salaries of FAI researchers and store clerks. Their only concern would be whether FAI researchers were undersupplied at the market price.
Interesting. If value isn’t determined by supply and demand, what is value? I don’t remember such subtle distinctions from my AP Econ classes.
I was distinguishing between the price paid for the product a worker produces and the price paid for the worker’s labour. These are both determined by supply and demand, but independently. A factory with large quantities of high-tech equipment may be one of very few factories in the world capable of producing a product. This product may be in high demand but limited in supply, due to the capital-intensive nature of its production. The sophisticated machinery may require only low-skilled labour, however, and the factory may be located in an area where such labour is amply supplied. In such a situation, the price of labour (wages) will be low while the price of the product is high. Think iPads manufactured in China.
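To put toy numbers on that mechanism (all figures invented for illustration; this is just two independent linear markets clearing separately):

```python
# Toy illustration: the product market and the labour market clear
# independently, so a high product price can coexist with a low wage.
# All numbers are made up.

def equilibrium_price(a, b, c, d):
    """Linear demand Qd = a - b*P meets linear supply Qs = c + d*P
    where Qd = Qs, i.e. at P* = (a - c) / (b + d)."""
    return (a - c) / (b + d)

# Product market: strong demand, tightly constrained supply
# (capital-intensive production, few competing factories).
product_price = equilibrium_price(a=1000, b=1, c=0, d=1)

# Labour market: modest demand for workers, plentiful local supply
# of low-skilled labour.
wage = equilibrium_price(a=200, b=1, c=100, d=9)

print(f"product price: {product_price:.0f}, wage: {wage:.0f}")
# -> product price: 500, wage: 10
```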
Alternatively, in the case of FAI research, demand may be low due to lack of interest or awareness, and perhaps also because it is not a very scalable problem (would Eliezer prefer an army of 10,000 random grad students, or 10 geniuses?). At the same time, supply may be high, since a relatively large number of people think it would be an interesting or important research project. This may lead to lower wages for an FAI researcher than for a store clerk, if the economics of store clerks push their wages up, even if the FAI researchers ultimately produce great value.
It is also true of course that price (exchange value) and value are not the same thing. If they were we would have no trade. Value is subjective. Trade occurs when both parties ascribe higher utility to the post-trade state of the world than to the pre-trade state of the world. When money is the medium of exchange the price reflects that one person values the money more than the item traded and the other values the item more than the money.
Enlightening, thank you. Do you think that inability to obviously and intuitively make such economic distinctions is likely to hurt my rationality? (That is, would it be better to read a computer programming textbook or a microeconomics textbook if I wanted to be a master rationalist?)
It’s a little difficult for me to give a good answer to this. I feel I reached the point of diminishing returns some time ago with computer programming textbooks (I’m a professional programmer) and have only relatively recently taught myself some economics. Both are valuable to a rationalist, but I’m not sure which has higher value. I think learning some basic economics may have more instrumental value than learning programming, however, if you’re not going to make a living as a programmer.
I would never discourage consideration of the possibility but I note that quite often the conclusion will be ‘I have not found a way to solve the cooperation problem.’
To make a good case for financial support, point to past results that are evidence of clear thinking and of the ability to get research done.
That’s what the paper was for. Unfortunately, I have run out of time.
While I would be interested in contributing to someone’s independent research, a vague post on LW doesn’t come anywhere close to meeting my minimum threshold for confidence to do such a thing. I would recommend that you continue working and spend your free time writing a paper or an intricate analysis or something else to inspire confidence in your potential to contribute significantly to the singularity/AI body of knowledge. Otherwise, why shouldn’t I just donate my money to SIAI?
Sorry, what? You want us to pay you to research string theory because some of the underlying issues are related to the singularity? How do we know you don’t just enjoy researching string theory?
The singularity connections—basic physical ontology, mathematical techniques—are somewhat oblique, though they do exist. But if I ever do get paid to work directly on string theory, it will surely take the form of a research job at a university, rather than charity from a reader of LessWrong. The strings are mentioned in this article because they’re part of what I have been doing, not because they’re the whole of it.
Oh. So what are you asking for money for?
That’s meant to be negotiable. Make me an offer!
Find people at universities with the kind of position you want. Look at their resumes or bios posted online, and see what they have in common.
I have done this; and for CS, what they had in common was having a PhD from a famous university.
I hope you find funding somewhere, but I really don’t think grant requests are appropriate for Less Wrong top-level posts.
I would add that such requests would be more appropriate and have more chance for success if some significant posts had been made on the subjects explaining basics and why more knowledge would be useful.
Yes, an open thread would be more appropriate.
My own thoughts on the matter are that requests for funds on LW are not a problem per se, but should be approached cautiously (trust, but verify) and limited to very well-specified projects, both to keep honest people honest* and to prevent the site from being seen as a money pump from the outside. To that end, I would not be opposed to a request along the lines of “I have a bid from XYZ Inc. to {create X software/do a survey on Y and report the results/produce a short documentary of Z, leading to greater exposure of rationality} and I’m trying to raise $NNNN.00 USD.” I might do so myself at some point in the future, with appropriate documentation and a reasonable chunk of my own change thrown at the issue. (Escrow, presentation of prototypes or proof-of-concepts, etc. are all relevant here.)
*I’m being generous. A good chunk of us in the survey basically said “Moral system? What morals?” and no way was this over-reported. Regardless, incentives matter even when you’re the only one around.
You’re referring to the 10.9% of LW survey respondents who owned to not believing in morality, right?
Yes.
I think those results were more indicative of people partaking in philosophical foot-shooting than anything else, but your point still stands.
Well, “doesn’t believe in a system of morality” and “actually acts like a cross between Snidely Whiplash and Lex Luthor” are different things, of course. Which is to say that I think that most people here, even those who don’t have a formal system of morality, probably act in a way that most would regard as moral most of the time. Just a guess, though.
For once, an apropos webcomic link that isn’t XKCD.
(Not actually that rare. SMBC links are reasonably common and there are rather a lot of links to other assorted webcomics.)
Why do you think this has anything to do with trustworthiness? Building and maintaining a reputation for trust can be a valuable strategy independently of any beliefs about morality. I trust my bank to hold my money (not blind trust of course) but I don’t believe my bank (as an institution) has a moral code or any particular pet theories of ethics.
Moral beliefs are only a small part of why someone may act in a trustworthy manner. However, acting in line with one’s sense of ethics would be an additional incentive to deal fairly. No, it’s not the primary force that makes your bank give your money back to you, that being a combination of reputation, as you say, and the rule of law. All else equal, however, I would expect that someone who believes they should return money they borrow to be more likely to repay a loan than someone who doesn’t.
You’ve given a cost estimate for living per month, but no time estimate for products related to the three projects you listed. The Renaissance system of patronage for artists and scientists produced a lot of good stuff, and it would be fun to have “executive producer” credit on a widely-cited string theory paper. My own net worth isn’t exactly Medici level, but participating in, say, a kickstarter.com project to fund one of these products is certainly on the table.
I don’t have any projects I need done that can be done remotely, but these people do. Dunno if there’s anything you’d want to do, but it’s there.
I’ll pay you to help me with my algebraic geometry homework. Do you know anything about coarse moduli spaces?
From a physics perspective, I relate more to the geometry than the algebra. Coarseness is not a property I’ve cared about before, but moduli spaces do find application in string theory, so we could take it from there, or just talk about stacks and schemes and so forth. But if you want to pursue this, email or PM might be best.
Formatting for top-level posts is different from that for comments...
Minor note: I haven’t looked at the linked Mulmuley paper (the PDF is failing to open for me), but the abstract (especially point 2) seems to suggest that this paper is, if anything, a reason to believe that these methods will not work.
Also, as far as I am aware from prior work I’ve seen, there’s no substantial connection between where string theory interacts with algebraic geometry and where complexity theory interacts with algebraic geometry. I’d be very interested in seeing anything that showed a connection. Algebraic geometry is a very large area of math, so the fact that two things happen to connect to it does not mean that the two things are themselves connected.
I don’t know if you mean won’t work to separate P from NP (Mulmuley’s goal), or won’t work on the subproblems of FAI (which is what I was proposing).
Quantization via brane → symplectic quotients → partial stability.
So they both involve Lie groups. What doesn’t?
Won’t work to separate P from NP.
Thanks. Reading those now.
I should expand on this… Geometric complexity theory is about posing and solving complexity-class problems in a geometric context. For example: computing the permanent of a matrix is #P-complete, while computing the determinant is in P. Permanents and determinants can be thought of as algebraic subvarieties, and showing that a certain mapping cannot turn determinants into permanents (conjecture 3.2 in the third paper) would show that P is not #P. The idea is to use certain new constructions from mathematical physics (e.g. the first paper) to understand such mappings. The space of orbits under the mapping is a quotient of the target space, and there is a history of using techniques from physics to understand these particular quotient spaces. The big development I see brewing is the relation between the “geometric Langlands” program and the quantization of the 5-brane in M-theory, for which a whole new approach to quantization has to be invented. So I’d like to see what happens if you take those new ideas and push them along the chain from the first paper to the third. (But that first paper is just supposed to be representative of the ongoing work; I’m not saying it specifically contains the technically appropriate concepts.)
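To make the permanent/determinant contrast concrete, here is a minimal sketch (mine, not from the papers): the determinant falls out of a polynomial-time routine, while the permanent below uses Ryser’s inclusion-exclusion formula, whose run time grows roughly like 2^n, in line with the permanent being #P-hard.

```python
# Determinant vs permanent on the same matrix: polynomial time versus
# the exponential-time Ryser formula (the best known general approach).
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Ryser's inclusion-exclusion formula, about 2^n * n^2 steps."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

A = [[1, 2],
     [3, 4]]
print(np.linalg.det(np.array(A)))  # 1*4 - 2*3 = -2.0 (polynomial time)
print(permanent_ryser(A))          # 1*4 + 2*3 = 10.0 (exponential time)
```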
Meanwhile, for P versus NP, there is a particular barrier to proof which Mulmuley and collaborators hope to get around because of the specialness of the varieties. Right now I’m trying to understand what it is about the algebraic-geometric facts which gives them the right logical and combinatorial properties to evade this “naturalization barrier”. Then I want to look at models of CEV (e.g. for a population of DFT agents) from this perspective, to see if it will help with the difficult problems there (the interplay between self-enhancement, utility function discovery, and utility function renormalization).
Another thing: although they don’t discuss it, it looks like this method might also get around the relativization barrier, since the associated varieties are going to look very different when you have an attached oracle.
Yeah, that seems to fit with the impression I got from the papers. I’m not convinced that this can overcome the natural proofs barrier, but this looks more promising than other attacks I’ve seen. (Unfortunately this is far enough from my own area of expertise that evaluating it in any great detail is probably going to be very difficult.)
I took it to MathOverflow after Witten’s latest paper. It would be crazy if string theory was the key to proving that P is not NP!
I would downvote this twice if I could.
I don’t like it when people say something is bad without saying why.
I don’t think this should’ve been a post—it sounds like a request for money. It’s not organized in a way to make it relevant to other people.
Fair enough. I really don’t want individual people asking for money on LW, though I’m fine with people speaking on behalf of causes; it just makes the incentives that much worse otherwise.
But also, I don’t think anyone should be funding a Penrose-esque qualia mysterian to study string theory.