I am Issa Rice. https://issarice.com/
riceissa
Did you end up running it through your internal infohazard review and if so what was the result?
You have my permission!
I see, thank you for the response!
Exposition as science: some ideas for how to make progress
For me, the thing that distinguishes exposition from teaching is that in exposition one is supposed to produce some artifact that does all the work of explaining something, whereas in teaching one is allowed to jump in and e.g. answer questions or “correct course” based on student confusion. This ability to “use a knowledgeable human” in the course of explanation makes teaching a significantly easier problem (though still a very interesting one!). It also means though that scaling teaching would require scaling the creation of knowledgeable people, which is the very thing we are trying to solve. Can we make use of just one knowledgeable human, and somehow produce an artifact that can scalably “copy” this knowledge to other humans? -- that’s the exposition problem. (This framing is basically Bloom’s 2 sigma problem.)
That’s very exciting to me! I personally study how science worked and failed historically and epistemic progress and vigilance in general to make alignment go faster and better, so I’ll be interested to discuss exposition as a science with you (and maybe give feedback on your follow-up posts if you want. ;) )
Cool! I just shared my draft post with you that goes into detail about the “exposition as science” strategy (ETA for everyone else: the post has now been published); if that post seems interesting to you, I’d be happy to discuss more with you (or you can just leave comments on the post if that is easier).
Doesn’t do what? I understand Eliezer to be saying that he figured out AI risk via thinking things through himself (e.g., writing a story that involved outcome pumps; reflecting on orthogonality and instrumental convergence; etc.), rather than being argued into it by someone else who was worried about AI risk. If Eliezer didn’t do that, there would still presumably be someone prior to him who did that, since conclusions and ideas have to enter the world somehow. So I’m not understanding what you’re modeling as ridiculous.
My understanding of the history is that Eliezer did not realize the importance of alignment at first, and that he only did so later after arguing about it online with people like Nick Bostrom. See e.g. this thread. I don’t know enough of the history here, but it also seems logically possible that Bostrom could have, say, only realized the importance of alignment after conversing with other people who also didn’t realize the importance of alignment. In that case, there might be a “bubble” of humans who together satisfy the null string criterion, but no single human who does.
The null string criterion does seem a bit silly nowadays since I think the people who would have satisfied it would have sooner read about AI risk on e.g. LessWrong. So they wouldn’t even have the chance to live to age ~21 to see if they spontaneously invent the ideas.
How to get people to produce more great exposition? Some strategies and their assumptions
A scheme for sampling durable goods first-hand before making a purchase
With help from David Manheim, this post has now been turned into a paper. Thanks to everyone who commented on the post!
Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety
Would you say you are traumatized/did unschooling traumatize you/did attending the public high school and college traumatize you?
Do you have a sense of where your anxiety/distractibility/”minor mental health problems” came from?
What was the chain of events leading up to you discovering LessWrong/the rationality community?
Vipul Naik has discovered that Alfred Marshall had basically the same idea (he even used the phrase “burn the mathematics”!) way back in 1906 (!), although he only described the procedure as a way to do economics research, rather than for decision-making. I’ve edited the wiki page to incorporate this information.
Thanks, I have added the quote to the page.
Lately I have been daydreaming about a mathematical monastery. I don’t know how coherent the idea is, and would be curious to hear feedback.
A mathematical monastery is a physical space where people gather to do a particular kind of math. The two main activities taking place in a mathematical monastery are meditative math and meditation about one’s relationship to math.
Meditative math: I think a lot of math that people do happens in a fast-paced and unreflective way. What I mean by this is that people solve a bunch of exercises, and then move on quickly to the next thing. There is a rush to finish the problem set or textbook or course and to progress to the main theorems or a more advanced course or the frontier of knowledge so that one might add to it. I think all of this can be good. But sometimes it’s nice to slow way down, to focus on the basics, or pay attention to how one’s mind is representing the mathematical object, or pay attention to how one just solved a problem. What associations did my mind make? Can I write down a stream-of-consciousness log of how I solved a problem? Did I get a gut sense of how long a problem would take me, and how reliable was that gut sense? Are the pictures I see in my head the same as the ones you see in yours? How did the first person who figured this out do so, and what was going on in their mind? Or how might someone have discovered this, even if it is not historically accurate? If I make an error while working on a problem, can I do a stack trace on that? How does this problem make me feel? What are the different kinds of boredom one can feel while doing math? All of these questions would get explored in meditative math.
Meditation about one’s relationship to math: Here the idea is to think about questions like: Why am I interested in math? What do I want to get out of it? What meaning does it give to my life? Why do I want to spend marginal time on math (rather than on other things)? If I had a lot more money, or a more satisfying social life, would I still be interested in doing math? How can I get better at math? What even does it mean to get better at math? Like, what are the different senses in which one can be “better at math”, and which ones do I care about and why? Why do I like certain pieces of math better than others, and why does someone else like some other piece of math better?
As the links above show, some of this already happens in bits and pieces, in a pretty solitary manner. I think it would be nice if there was a place where it could happen in a more concentrated way and where people could get together and talk about it as they are doing it.
Above I focused on how being at a mathematical monastery differs from regular mathematical practice. But it also differs from being at a monastery. For example, I don’t think a strict daily schedule would be an emphasis. I also imagine people would be talking to each other all the time, rather than silently meditating on their own.
Besides monasteries and cults, I think Recurse Center is the closest thing I know about. But my understanding is that Recurse Center has a more self-study/unschooling feel to it, rather than a “let’s focus on what our minds and emotions are doing with regard to programming” feel.
I don’t think there is anything too special about math here. There could probably be a “musical monastery” or “drawing monastery” or “video game design monastery” or whatever. Math just happens to be what I am interested in, and that’s the context in which these thoughts came to me.
What does “±8 relationships” mean? Is that a shorthand for 0±8, and if so, does that mean you’re giving the range 0-8, or are you also claiming you’ve potentially had a negative number of relationships (and if so what does that mean)? Or does it mean “8±n relationships”, for some value of n?
Seems like you were right, and the Peter in question is Peter Eckersley. I just saw in this post:
That post did not link to a source, but I found this tweet where Brian Christian says: