I strongly agree with Owen’s suggestions about figuring out a plan grounded in current circumstances, rather than reproducing what was.
Here are some potentially useful directions to explore.
Just to be clear, I’m not claiming that the institute should adopt all of these. Indeed, attempting to adopt all of them would likely be incoherent, amounting to an attempt to pursue too many different directions at the same time.
These are just possibilities, some subset of which is hopefully useful:
Rationality as more of a focus area: Given that Lightcone runs Less Wrong, an obvious path to explore is whether the art of rationality could be further developed by providing people either a fellowship or a permanent position to work on it:
Being able to offer such paid positions might allow you to draw contributions from people with rare backgrounds. For example, you might decide that anthropology would be useful for understanding other cultures and their practices, and so directly hire an anthropologist to help with that.
It would also help with projects that would be valuable, but which would be a slog and require specific expertise. For example, it would be great to have someone update the Sequences in light of more recent psychological research.
Greater focus on entrepreneurship:
You’ve already indicated your potential interest in taking it in this direction by adding it as one of the options on your form.
This likely makes sense given that Lightcone is located in the Bay Area, the region with the most entrepreneurs and venture capitalists in the world.
Insofar as a large part of the impact of FHI was the projects it inspired elsewhere, it may make sense to more directly attempt this kind of incubation.
Response to the rise of AI:
One of the biggest shifts in the world since FHI was started has been the dramatic improvement in AI.
One response to this would be to focus more on the risks and impacts from AI. However, there are already a number of institutions focusing on this, so this might simply end up being a worse version of them:
You may also be able to find a unique angle. For example, given that Eliezer was motivated to create the art of rationality in order to help people understand his arguments on AI safety, it might be valuable for there to be a research program that intertwines those two elements.
Or you might identify areas, such as AI consciousness, that are still massively neglected.
Another response would be to try to figure out how to leverage AI:
Would it make sense to train an AI agent on Less Wrong content? (A minimal sketch of what this could involve appears after this list.)
As another example, how could AI be used to develop wisdom?
Another response would be to decide that other orgs are better positioned to pursue these projects.
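Since the question above about training on Less Wrong content is the most concrete of these, here is a minimal sketch of the simplest version of it: fine-tuning an open base model on a corpus of posts, using the Hugging Face libraries. The corpus file lesswrong_posts.jsonl and the gpt2 base model are purely illustrative assumptions, not recommendations, and this is a rough sketch rather than a worked-out proposal:

```python
# Illustrative sketch only: fine-tune a small open model on a
# hypothetical local dump of posts ("lesswrong_posts.jsonl", one
# {"text": ...} JSON object per line). Model and file are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # stand-in for whatever base model one would actually use
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the (hypothetical) corpus of posts and tokenize it.
dataset = load_dataset("json", data_files="lesswrong_posts.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Standard causal language modelling: the collator builds labels from inputs.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lw-finetune",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Of course, a model fine-tuned to imitate LW prose is a long way from an “agent”, let alone from wisdom; the point of the sketch is just that the entry cost of experimenting in this direction is low.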
Is there anything in the legacy of MIRI, CFAR, or FHI that is particularly ripe for further development?
For example, would it make sense for someone to try to publish an explanation of some of MIRI’s ideas on decision theory in a mainstream philosophical journal?
Perhaps some techniques invented by CFAR could be tested with a rigorous academic study?
Potential new sources of ideas:
There seems to have been a two-way flow of ideas between LW/EA and FHI.
While there may still be more ideas within these communities that deserve further exploration, it may also make sense to consider whether there are any new communities that could provide a novel source of ideas:
A few possibilities immediately come to mind: post-rationality, progress studies, sensemaking, meditation, longevity, predictions.
Less requirement for legibility than FHI had:
While FHI leaned towards the speculative end of academia, there was still a requirement for projects to be at least somewhat academically legible. What is enabled by no longer having that kind of requirement?
Opportunities for alpha from philosophical rigour:
This was one of the strengths of FHI—bringing philosophical rigour to new areas. It may be worth exploring how this could be preserved and carried forward.
One of the strengths of academic philosophy—compared to the more casual writing that is popular on Less Wrong—is its focus on rigour and drawing out distinctions. If this institute were able to recruit people with strong philosophical backgrounds, are there any areas that would be particularly ripe for applying this style of thinking?
Pursuing this direction might be a mistake if you would struggle to recruit the right people. It may turn out that FHI’s placement within Oxford was vital for attracting philosophical talent of that calibre.
I agree in the abstract with the idea of looking for niches, and I think that several of these ideas have something to them. Nevertheless, when I read the list of suggestions, my overall feeling is that it’s going in a slightly wrong direction, or missing the point, or something. I thought I’d have a go at articulating why, although I don’t think I’ve got this to the point where I’d firmly stand behind it:
It seems to me like some of the central FHI virtues were:
Offering a space to top thinkers where the pitch was pretty much “please come here and think about things that seem important in a collaborative truth-seeking environment”
I think that the freedom of direction, rather than focusing on an agenda or path to impact, was important for:
attracting talent
finding good underexplored ideas (because, of course, at the start of the thinking people don’t know what’s important)
Caveats:
This relies on your researchers having some good taste in what’s important (so this needs to be part of what you select people on)
FHI also had some success launching research groups where people were hired to work on more focused things
I think this was not the heart of the FHI magic, though, but more like a particular type of entrepreneurship picking up and running with things from the core
Willingness to hang around at whiteboards for hours talking and exploring things that seemed interesting
With an attitude of “OK but can we just model this?” and diving straight into it
Someone once described FHI as “professional amateurs”, which I think is apt
The approach is a bit like the attitude ascribed to physicists in this xkcd, but applied more to problems-that-nobody-has-good-answers-for than things-with-lots-of-existing-study (and with more willingness to dive into understanding existing fields when they’re importantly relevant for the problem at hand)
Importantly, mostly without directly asking “ok but where is this going? what can we do about it?”
Prioritization at a local level was somewhat ruthless, but focused on “how do we better understand important dynamics?” and not “what has external impact in the world?”
Sometimes orienting to “which of our ideas does the world need to know about? what are the best ways to disseminate these?” and writing about those in high-quality ways
I’d draw some contrast with MIRI here, who I think were also good at getting people to think of interesting things, but less good at finding articulations that translated to broadly-accessible ideas
Reading your list, a bunch of it seems to be about decisions about what to work on or what locally to pursue. My feeling is that those are the types of questions which are largely best left open to future researchers to figure out, and that the appropriate focus right now is more like trying to work out how to create the environment which can lead to some of this stuff.
Overall, the take in the previous paragraph is slightly too strong. I think it is in fact good to think through these things to get a feeling for possible future directions. And I also think that some of the good paths towards building a group like this start out by picking a topic or two to convene people on and get them thinking about. But if places want to pick up the torch, I think it’s really important to attend to the ways in which FHI was special that aren’t necessarily well-represented in the current x-risk ecosystem.
Just thought I’d add a second follow-up comment.
You’d have a much better idea of what made FHI successful than I would. At the same time, I would bet that in order to make this new project successful—and be its own thing—it’d likely have to break at least one assumption behind what made old FHI work well.
“Reading your list, a bunch of it seems to be about decisions about what to work on or what locally to pursue.”
I think my list appears more this way than I intended because I gave some examples of projects I would be excited by if they happened. I wasn’t intending to stake out a strong position as to whether these should be projects chosen by the institute vs. examples of projects that it might be reasonable for a researcher to choose within that particular area.
Makes sense! My inference came from the discussion at this stage being a high-level one about ways to set things up, but it does seem good to have space to discuss object-level projects that people might get into.