Why create a new one at all?
(1) The world does not have a surfeit of intelligent technical folks thinking about how to make the future a better place. Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don’t think it’d be overkill.
(2) There is a great deal of technical talent here in central Colorado that doesn’t currently have a nexus for these kinds of discussions about handling emerging technologies responsibly. There is a real gap here that I intend to fill.
You know, you could do that. By giving them the money.
I have done that, on a number of different occasions. I have also tried for literally years to contribute to futurism in other ways; I attempted to organize a MIRIx workshop and was told no because I wasn’t rigorous enough or something, despite the fact that the MIRIx webpage says:
“A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together.”
Which is exactly what I was proposing.
I have tried for years to network with people in the futurist/rationalist movement, by offering to write for various websites and blogs (and being told no each and every single time), or by trying to discuss novel rationality techniques with people positioned to provide useful feedback (and being ignored each and every single time).
While I may not be Eliezer Yudkowsky, the evidence indicates that I’m at least worth casually listening to; even so, I have had no luck getting even that far.
I left a cushy job in Asia because I wanted to work toward making the world a better place, and I’m not content simply giving money to other people to do so on my behalf. I have a lot of talent and energy which could be going towards that end; for whatever reason, the existing channels have proven to be dead ends for me.
But even if the above were not the case, there is an extraordinary amount of technical talent in the Front Range which could be going towards more future-conscious work. Most of these people probably haven’t heard of LW or don’t care much about it (as evinced by the moribund LW meetup in Boulder and the very, very small one in Denver), but they might take notice if there were a futurist institution within driving distance.
Approaching from the other side, I’ve advertised futurist-themed talks on LW numerous times and gotten, like, three people to attend.
I’ll continue donating to CFAR/MIRI because they’re doing valuable work, but I also want to work on this stuff directly, and I haven’t been able to do that with existing structures.
So I’m going to build my own. If you have any useful advice for that endeavor, I’d be happy to hear it.
Maybe your mistake was to write a book about your experience of self-study instead of making a series of LW posts. Nate Soares took this approach and he is now the executive director of MIRI :P
I gave that some thought! LW seems much less active than it once was, though, so that strategy isn’t as appealing. I’ve also written a little for this site and the reception has been lukewarm, so I figured a book would be best.
We’re now a lot more active at LW2.0! Some of my stuff which wasn’t that popular here is getting more attention there.
Maybe you could try it too?
Do you know why?
Different reasons, none of them nefarious or sinister.
I emailed Julia Galef a technique I call ‘the failure autopsy’, which as far as I know is completely unique to me. She gave me a cheerful ‘I’ll read this when I get a chance’ and never got back to me.
I’m not sure why I was turned down for a MIRIx workshop; I’m sure I could’ve managed to get some friends together to read papers and write ideas on a whiteboard.
I’ve written a few essays for LW, the reception of which was lukewarm. I don’t know if I’m just bad at picking topics of interest or if it’s a reflection of the declining status of this forum.
To be clear: I didn’t come here to stamp my feet and act like a prissy diva. I don’t think the rationalists are big meanies who are deliberately singling me out for exclusion. I’m sure everyone has 30,000 emails to read and a million other commitments and they’re just busy.
But from my perspective it hardly matters: the point is that I have had no luck building contacts through the existing institutions and channeling my desire to help in any useful way.
You might be wondering whether or not I’m just not as smart or as insightful as I think I am. That’s a real possibility, but it’s worth pointing out that I also emailed the failure autopsy technique to Eric S. Raymond—famed advocate of open source, bestselling author, hacker, philosopher, righteous badass—and he not only gave me a lot of encouraging feedback, he took time out of his schedule to help me refine some of my terminology to be more descriptive. We’re actually in talks to write a book together next year.
So it might be me, but there’s evidence to indicate that it probably isn’t.
Try publishing in mainstream AI venues? (AAAI has some sort of safety workshop this year.) I am assuming that if you want to start an institute, you have publishable stuff you want to say.
I like that idea too. How hard is it to publish in academic journals? I don’t have more than a BS, but I have done original research and I can write in an academic style.
Pretty hard, I suppose.
It’s weird, though: if you are asking these types of questions, why are you trying to run an institute? Typically very senior academics do that. (I am not singling you out, either; I have the same question for the folks running MIRI.)
From the outside view, a person who has had no luck building contacts with existing institutions is unlikely to be a good person to start a new institute.
Of course, getting someone like Eric S. Raymond to be open to writing a book with you is a good sign.
Ahem. The rest of the world calls it a post-mortem. See e.g. this.
So you do not know why. Did you try to figure it out? Do a post-mortem, maybe?
A post-mortem isn’t quite the same thing. Mine has a much more granular focus on the actual cognitive errors occurring, with neat little names for each of them, and has the additional step of repeatedly visualizing yourself making the correct move.
https://rulerstothesky.com/2016/03/17/the-stempunk-project-performing-a-failure-autopsy/
This is a rough idea of what I did; the more awesome version with graphs will require an email address to which I can send a .jpg.
Neat little names, I see. Thank you, I’ll pass on the jpg awesomeness.
The Future of Life Institute thinks that a portfolio approach to AI safety, where different groups pursue different research agendas, is best. It’s plausible to me that we’ve hit the point of diminishing returns in terms of allocating resources to MIRI’s approach, and marginal resources are best directed towards starting new research groups.
I hadn’t known about that, but I came to the same conclusion!
You could start a local chapter of the Transhumanist Party, or of anything you want, and just organize gatherings of people to discuss futuristic topics like life extension, AI safety, whatever. Officially registering such an activity is probably a waste of time and money unless you know what you’re going to do with it, like collecting donations or renting an office.
There is no need to start an institute if you don’t have a dedicated group of people around you. An institute consisting of one person is something strange.
That’s not a bad idea. As it stands I’m pursuing the goal of building a dedicated group of people around these ideas, which is proving difficult enough as it is. Eventually I’ll want to move forward with the institute, though, and it seems wise to begin thinking about that now.
Why create any of them?