As long as MIRI is led and funded by people who care about the actual goal rather than citations, I don’t see why we would go astray.
I can see a couple different ways that it could happen. Funders might have trouble judging actual progress in the absence of academic peer-reviewed publications and citations. Especially as more academics join the AI risk field and produce more papers and citations, funders might be tempted to think that they should re-direct resources towards academia (in part for subconscious status reasons). MIRI may have to switch to more academic norms in order to compete, which would then rub off on LW. (This seems to already be happening to some extent.) Or LW moves towards a more academic culture for internal status-economics reasons, and MIRI leaders may not have much control over that. (In that world, maybe LWers will eventually look down upon MIRI for not being sufficiently academic.)
You know what I will say, y’all should stay in your lane, re: incentives.
Yudkowsky’s incentives caused him to write HPMOR (which has precisely zero (0) academic value), and publish basically nothing. So as far as the mainstream is concerned his footprint does not exist. He’s collecting a salary at MIRI, presumably. What is that salary buying?
Mainstream academics who collect a salary will say they teach undergraduates and publish stuff to make grant agencies happy. Some of that stuff is useless; a lot of it is very useful indeed.
Reform attempts for “non-aligned” ecosystems like academia will almost certainly not work because (as you all are well aware) “aligning” is hard.
MIRI has the same problem everyone else has: if it grows it will become a non-aligned ecosystem; if it doesn’t grow it will not have any impact.
You know what I will say, y’all should stay in your lane, re: incentives.
I don’t understand this. Please clarify? (Urban Dictionary says “stay in your lane” means mind your own business, which is exactly what we’re doing, namely trying to figure out what direction to push our own culture.)
and publish basically nothing
He’s publishing mostly on Arbital these days. See this and this for examples. I’m not sure why he doesn’t at least post links elsewhere to draw people’s attention though. Hopefully that will change after LW 2.0 goes live.
So as far as the mainstream is concerned his footprint does not exist.
I’m not sure what you mean by this either. Certainly the people who work on AI safety at Berkeley, OpenAI, and DeepMind all know about Eliezer and MIRI’s approach to AI alignment, even if they don’t agree that it’s the most promising one. Are you saying that if Eliezer had published in academia, they’d be more inclined to follow that approach, as opposed to the more ML-based approaches that they’re currently following?
MIRI has the same problem everyone else has: if it grows it will become a non-aligned ecosystem
I think having “aligned” human institutions is too much to hope for. As I mentioned elsewhere in this thread, perhaps the best we can do is to have different bad incentives / inefficiencies in different institutions so that they’re able to reach different sets of low hanging fruit, and not all suffer from the same collective blind spots.
I get super annoyed by criticisms of mainstream academia coming out of the rationality-sphere, I suppose (mostly because they fall into either stuff that every academic already knows about and that’s very hard to fix, or stuff that’s just vastly misinformed). Roko the other day on Facebook: “academia produces nothing of value.”
I’m not sure what you mean by this either.
I suppose what I mean by this is that academia functions on a dual currency/kudos system. “Academic kudos” is acquired by playing certain formal games within mainstream academia (publications in fancy journals and so on). So, for example, if Tegmark published Life 3.0 and it reached the best-seller list, that would not award him a ton of “academic kudos” (well, at least in my opinion; Hanson might disagree). Instead, that would be called “being good with the media.”
“Academic kudos” is a bit different from “I have heard of you.”
I think having “aligned” human institutions is too much to hope for.
I agree with this entire paragraph. I am a big fan of letting a thousand flowers bloom.