Fields related to Friendliness philosophy
I just realized that my ‘Interests’ list on The Facebook made an okay ad hoc list of fields potentially related to Friendliness-like philosophy. They sorta kinda flow into each other by relatedness and are not particularly prioritized. This is my own incomplete list, though it was largely inspired by many conversations with Singularity Institute folk. Plus signs mean I’m moderately more certain that the existing field (or some rationalist re-interpretation of it) has useful insights.
Friendliness philosophy:
+Epistemology (formal, Bayesian, reflective, group)
Axiology
+Singularity (seed AI, universal AI drives, neuromorphic/emulation/de novo/kludge timelines, etc)
+Cosmology (Tegmark-like stuff, shake vigorously with decision theory, don’t get attached to ontologies/intuitions)
Physics (Quantum MWI, etc)
Metaphysics
+Ontology of agency (Yudkowsky (kind of), Parfit, Buddha; seemingly little good condensed material on this)
+Ontology (probably grounded in algorithmic information theory / theoretical computer science ideas)
Ontologyology (abstract Turing equivalence, et cetera)
+Metaphilosophy (teaching ourselves to teach an AI to do philosophy)
+Cognitive science (computational cognitive science especially)
Neuroscience (affective neuroscience)
Machine learning (reinforcement learners, Monte Carlo)
+Computer science (super theoretical)
+Algorithmic probability theory (algorithmic information theory, universal induction, etc)
+Decision theory (updateless-like)
Optimal control theory (stochastic, distributed; interestingly harder than it looks)
+Bayesian probability theory (for building intuitions, mostly, but generally useful)
Rationality
Dynamical systems (attractors, stability)
+Complex systems (multilevel selection, hierarchical stuff, convergent patterns / self-similarity)
Cybernetics (field kind of disintegrated AFAIK, complex systems took over)
Microeconomics (AGI negotiation stuff, human preference negotiation at different levels of organization)
+Meta-ethics (Bostrom)
Morality (Parfit)
Moral psychology
Evolutionary game theory
+Evolutionary psychology (where human preferences come from (although again, universal/convergent patterns))
+Evolutionary biology (how preferences evolve, convergent features, etc)
Evolutionary developmental biology
Dual inheritance theory (where preferences come from, different ontology and level of organization, see also memetics)
Computational sociology (how cultures’ preferences change over time)
Epidemiology (for getting intuitions about how beliefs/preferences (memes) spread)
Aesthetics (elegance, Occam-ness, useful across many domains)
Buddhism (Theravada, to a lesser extent Zen; basically rationality with a different ontology and more emphasis on understanding oneself/onenotself)
Jungian psychology (mostly archetypes)
Psychoanalysis (id/ego/super-ego, defense mechanisms)
Transpersonal psychology (Maslow’s hierarchy, convergent spiritual experiences, convergent superstimuli for reinforcement learners, etc)
Et cetera
I would rather you didn’t recommend that people study things that are wrong—not the best use of their time. I’m referring especially to the parts of psychology that consisted of people just making up whatever sounded good.
Your focus on ontology and meta-ontology is interesting, could you explain more how it’s related to friendliness?
I quite obviously don’t think that they’re wrong.
It seems that a large part of what makes Steve Rayhawk so awesome is that he can make insightful connections between disparate fields by reasoning about them in terms of a larger, consistent framework. Same goes for e.g. Michael Vassar and Peter de Blanc. That said, it’s probable that their ontologies don’t carve reality at its joints in the way that would be most conducive to reasoning about Friendliness… and most rationalists I talk to just seem to lack a coherent ontology entirely, which makes it damn hard to propagate belief updates between domains, and hard to see potential patterns or hypotheses that suggest themselves. (Think of the state of what should have been known as evolutionary biology, before Darwin discovered it.) It seems like it’d be useful to better understand how they managed to construct their ontologies (and metaontologies). It’s also confusing that ontology has become so tied up with algorithmic-probability-theoretic cosmology and whatnot. Meanwhile we’re still using words like ‘reality fluid’ while trusting our Occamian intuitions about which ontologies are elegant.
I, for one, have never in my life used the words “reality fluid.”
Well, now I have. :D
You’ve got things on your list that are mutually exclusive (Jung and Freud being the most glaring example to me, but almost any science and “Chakras” would work too), so it’s pretty safe to say that a number of things on your list are wrong.
No, you mentioned them.
Pah, a trifle.
I think you partly mean different things by “wrong”. Two contradictory models can each make lots of reliably correct predictions or find lots of worthwhile insights, even if one or both make false fundamental assumptions or ontological claims. (It’s easy to focus on supernatural ontological claims as falsifying a model, but they usually don’t invalidate, or have much effect on, its predictions (though they do hold back expansion and integration of models).)
I suspect you and Will have different definitions of “wrong”. It seems obvious that, even if two theories are mutually exclusive taken as wholes, each one could contain some unique useful observations and concepts (even if one or both theories make some dead-wrong assumptions or false claims of ontological specialness).
Can you give some examples of this, or maybe even write a post on the topic? I’m still really fuzzy as to what you’re talking about.
Your list basically includes every field, except maybe crocheting, although I’m not sure you don’t want to include that one as well.
It doesn’t include useless things, like math. For an idea of all the things I left out: http://en.wikipedia.org/wiki/Fields_of_science
That seems to be the thing that Eliezer most often mentions when he is considering what he needs to learn more of in order to further his own friendliness research!
Friendliness philosophy and FAI research are two different domains, though I was being somewhat facetious when disparaging math. It does seem as if math will end up being important for tackling increasingly meta decision theory problems, which might be most of Friendliness philosophy.
Which are, nevertheless, more closely related than friendliness and chakras. It is also fundamentally necessary for many of the most useful fields that you mention. There is no way math deserves to be left out.
I listed all the math I know of that seems relevant to Friendliness philosophy and isn’t particularly obscure (like metacompilation). What did I miss?
It seems like algorithmic probability theory and complex systems are the most conceptually rich fields I listed.
I would remove from the list before adding to it. I also wouldn’t make the claim:
This seems too broad. I’m aware that friendliness isn’t an easy task, but wouldn’t a narrower list with the 10-20 most important things be more useful?
I’ll put a plus sign next to fields I’m more confident are potentially useful.
Perhaps consider sorting on column 0.
How deep have you gotten into Axiology?
I took two quizzes almost a year ago that I vaguely remember as being worth taking before diving deeper into the content. I recall that the first quiz (ordering the goodness or badness of things) said I was very coherent, while the second (environment and work process) said I was a disaster. That seemed like a fair assessment at the time and was part of the impetus for optimizing my work habits and areas, but I never went back to study the content of the field.
Could you summarize your readings in the area a bit, or say what the good or bad parts seem to be? More speculatively, what do you think the applications to Friendliness are?