I am the co-founder of and researcher at the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last sixteen years I have worked with MIRI, CFAR, EA Global, and Founders Fund, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.
JustinShovelain
Sequential Organization of Thinking: “Six Thinking Hats”
Vote this down for karma balance.
Vote this up if you are the oldest child with siblings.
Vote this up if you are an only child.
Vote this up if you have older siblings.
Poll: Do you have older siblings, or are you an only child?
Coffee: When it helps, when it hurts
I’m thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to Less Wrong. Is there demand?
Closely related to your point is the paper, “The Epistemic Benefit of Transient Diversity”
It describes and models the costs and benefits of independent invention and transient disagreement.
Meetup: Bay Area: Sunday, March 7th, 7pm
Why are you more concerned about something with unlimited ability to self-reflect making a calculation error than about the above being a calculation error? The AI could implement the above if the calculation implicit in it is correct.
What keeps the AI from immediately changing itself to only care about the people’s current utility function? That’s a change with very high expected utility as defined in terms of their current utility function, and one with little tendency to change their current utility function.
Will you believe that a simple hack will work with lower confidence next time?
I’ll be there.
Hmm, darn. When I write I do have a tendency to see the ideas I meant to describe instead of my actual exposition; I don’t like grammar-checking my writing until I’ve had some time to forget the details, because otherwise I read right over my errors unless I pay special attention.
I did have three LWers look over the article before I posted it, and got the general criticism that it was a bit obscure and dense but understandable and interesting. I was probably too ambitious in trying to include everything within one post, though; it was a length-versus-clarity tradeoff.
To address your points:
Have you not felt, or encountered in others, the opinion that our life goals may be uncertain, are something to have opinions about, and are valid targets for argument? Also, isn’t uncertainty about our most fundamental goals something we must consider and evaluate (explicitly or implicitly) in order to verify that an artificial intelligence is provably Friendly?
Elaborating on the second statement: when I used “naturalistically” I wished to invoke the idea that the exploration I was doing was similar to classifying animals before we had taxonomies — we look around with our senses (or imagination and inference in this case), see what we observe, and lay no claim to systematic search or analysis. In this context I did a kind of imagination-limited shallow search without trying to systematically relate the concepts (combinatorial explosion, and I’m not yet sure how to condense and analyze supergoal uncertainty).
As to the third point: what I did in this article is allocate the name “supergoal uncertainty”, roughly describe it in the first paragraph in the hope of bringing up the intuition, and then consider various definitions of “supergoal uncertainty” following from that intuition.
In retrospect, I probably erred on the clarity-versus-writing-time trade-off, and was perhaps biased toward getting this uncomfortable writing task (I’m not a natural writer) off my plate so I could do other things.
I think he meant that even if we are not religious, society tends to pull us into moral realism even though of course moral realism is an illusion.
You are correct, though I don’t go as far as calling moral realism an illusion because of unknown unknowns (though I would be very surprised to find it isn’t illusory).
Addressing your reification point:
“By means of reification something that was previously implicit, unexpressed and possibly unexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation.” — Reification (computer science), Wikipedia.
I don’t think I abused vocabulary, beyond possibly generalizing meanings in straightforward ways and taking words and meanings common in one topic and using them in a context where they are rather uncommon (e.g., computer science terms in philosophy). I rely on context to refine and imbue words with meaning instead of focusing on dictionary definitions (to me all sentences take the form of puzzles and words are the pieces; I’ve written more words in proofs than in all other contexts combined). I will try to pay more attention to context-invariant meanings in the future. Thanks for the criticism.
Intuitive supergoal uncertainty
Some things I use to test mental ability as well as train it are: Brain Workshop (a free dual n-back program), Cognitivefun.net (a site with assorted tests and profiles covering everything from reaction time, to subitizing, to visual backward digit span), Posit Science’s jewel diver demo (a multi-object tracking test), and Lumosity.com (Brain Shift, Memory Matrix, Speed Match, Top Chimp). All of these tests can be found for free on the internet.
Subjectively, the regular use of these tests has increased my metacognitive and self-monitoring ability. Anyone have other suggestions? How about tests one can do without the aid of external devices?
Complementary to determining whether one’s brain is in its best state, there is the question of how to improve or fix it. Keeping with the general spirit of this thread: what are some strategies people use to improve their cognitive functioning (as it pertains to low-level properties such as short-term memory) in the short term, without the use of external aids? A few I use are priming emotional state with posture, expression, and words, doing mental arithmetic, memorizing arbitrary information, and doing the above mental tests.
Interesting idea.
I agree that trusting newly formed ideas is risky, but there are several reasons to convey them anyway (non-comprehensive listing):
To recruit assistance in developing and verifying them
To convey an idea that is obvious in retrospect, an idea you can be confident in immediately
To signal cleverness and ability to think on one’s feet
To socially play with the ideas
What we are really after, though, is to assess how much weight to assign to an idea off the bat, so we can calculate the opportunity costs of thinking about the idea in greater detail and of asking for it to be fleshed out and conveyed fully. This overlaps somewhat with the confidence with which the speaker conveys the idea (and the context-sensitive rules for determining it). Also, how do you gauge how old an idea really is? Especially if it condenses gradually, or is a simple combination of very old parts? Still… some metric is better than no metric.