It is supposed to create a way of talking about things.
For example, we can examine which facets the rationalist community is good at using (good at research, good at optimisation, trying to get better at positive-sum social stuff, not so good at encouraging paradigm shifting, IMO).
Or you can try to unpick the history of intelligence in terms of facets. You can then talk about what happened at the birth of humanity and figure out its applicability as evidence about the creation of AI. On the facet view of things, we adopted a new facet of being: the ability to absorb and also output linguistic information, so a new epistemic facet. Are the things we posit AIs can do that we can’t as big a deal as this kind of epistemic facet, or bigger? Should we expect them to be better paradigm shifters than us?
What makes you think that this community isn’t good at paradigm shifting? Which kinds of communities do you consider good at paradigm shifting, and how do you evaluate that they are better?
Normal science. For what it is worth.
Most paradigm shifters fail. But science at least gives them some space, lots of prestige and a record of their failure.
The rationalist community’s emphasis on expected utility (especially at the individual scale), with things like giving what you can, recommending comfortable jobs and predictable charities, means that the brightest and best might not be trying to paradigm shift.
Who knows what the utility for humanity of unifying quantum physics and relativity would be? We don’t know if it will enable new inventions or just be pleasing maths. It is a known unknown; there are probably still unknown unknowns out there.
Our requirements for evidence and references also don’t fit well with paradigm shifting, as you might need to spend time discussing and refining ideas before you can figure out how to build equipment to collect evidence for your ideas. There is not a good space in the rationalist community for that.
Other things that might help paradigm shifters: having good documentation of all the failed ideas and descriptions of why they failed might stop people repeating themselves and show people fallow areas to avoid.
I think the karma ranking system also does not help paradigm-shifting discussion on LW. You would likely be downvoted by people who are invested in the paradigm you are trying to shift (just as Kelvin pronounced X-rays a hoax). You are explicitly talking about things that seem unlikely and/or that you don’t have a good language for.
I think we could do better than normal science.
If you believe Kuhn, then normal science doesn’t constantly change its paradigms. It’s something that happens less than once per decade.
Most paradigm shifters fail. But science at least gives them some space, lots of prestige and a record of their failure.
That’s not true. A person who doesn’t play according to the rules of the established paradigm gets no prestige in science.
I have a hard time getting a Wikipedia page for Thomas Hanna considered notable (and thus transferred from draft space to the Wikipedia main space), despite him being accomplished as a creator of paradigms.
If I look at new paradigms, then I see MIRI’s approach to AI, which is a different paradigm than what existed before. MIRI does get funding despite not being a predictable charity. MIRI even gets funding from OpenPhil.
The same goes for CFAR. It uses a different paradigm than academia when researching decision making.
MetaMed was a new paradigm but failed.
Effective Altruism is a new paradigm.
A month ago I wrote a paradigm post on LW. It doesn’t have that many upvotes, but it has engagement. I also got engagement before that, when I circulated the draft among LW Berlin.
That paradigm in that form is the result of five years of thinking, since I did a lot for Quantified Self in 2011. I did a lot of experimenting with credence calibration in different forms and had discussions about it until the idea of the “prediction-based medicine” paradigm arose.
Our requirements for evidence and references also don’t fit well with paradigm shifting, as you might need to spend time discussing and refining ideas before you can figure out how to build equipment to collect evidence for your ideas. There is not a good space in the rationalist community for that.
A while ago a poster on LW asked a question about how to estimate the probability that a woman he asks out for a date says “Yes”.
Another poster complained that this is an example of LW doing everything wrong and being too nerdy and simply not getting what dating is about. The paradigm in which you write down a probability for the likelihood of getting a “Yes” feels foreign. The practical value of writing down the probability isn’t apparent.
The fact that the practical value isn’t apparent didn’t prevent this from being discussed on LW. The lack of evidence or references also didn’t prevent the idea from being discussed on LW. There is a place for posts like that on LW. Discussing them leads to refinement of ideas.
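As an aside (my gloss, not something from the original exchange): one concrete thing writing the probability down buys you is that you can score your own calibration afterwards. A minimal Python sketch, with made-up records:

```python
# Minimal calibration sketch (illustrative; the records are invented).
# Each record is (stated probability of "Yes", actual outcome: 1 = yes, 0 = no).
predictions = [
    (0.7, 1),
    (0.2, 0),
    (0.6, 0),
    (0.9, 1),
]

def brier_score(records):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is perfect; always saying 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in records) / len(records)

print(f"Brier score: {brier_score(predictions):.3f}")  # 0.125 for the records above
```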
Other things that might help paradigm shifters: having good documentation of all the failed ideas and descriptions of why they failed might stop people repeating themselves and show people fallow areas to avoid.
I don’t see the point of that when it comes to changing paradigms in Kuhn’s sense.
That’s not true. A person who doesn’t play according to the rules of the established paradigm gets no prestige in science.
Physicists seem to be able to go off-paradigm and explore promising paradigms like string theory and still be prestigious. A long-standing discrepancy and a known history of paradigm shifts probably make them more open to it. There are things that are hard for them to explore, though, like data that violates conservation of energy. Mainstream ML/AI seems a lot more conservative in comparison.
My statements about the rationalist community are based partly on trying to get a single locus of community (e.g. LessWrong or Arbital). See my reply to Villiam for what I think someone should try.
My understanding of Kuhnian revolutions is that they happen in the following way:
1) There is either
a) a collection of data meant to refine current in-paradigm measurements, during which a surprise occurs, or
b) an existing phenomenon not predicted by the in-paradigm theories.
2) Someone creates some language to try to explain that measurement/phenomenon.
3) There is a period of language refinement and measurement refinement until a new language is stable.
Agreed? None of the paradigm shifts you mention quite fit this mold. They do change the language around things, but not to improve people’s predictions about the world (at least in a scientific sense). They seek to improve how we interact with the world.
I also think they have been created outside the harsh reality of the online rationalist community (the only offline rationalist community I have experience with is London).
The concepts and language for MIRI started on the Extropians and SL4 mailing lists (before even Overcoming Bias), which were very freewheeling and allowed far-out ideas.
Similarly, the concepts and language for EA have probably been marinating in whatever social groups Peter Singer was hanging around with since he started writing about rationality and charities (’96?). I don’t know what that was like; probably philosophical, which is all about the discussion.
Was MetaMed born inside LW or apart from it? Could we improve how things like that are born and learn from their mistakes?
I’ve not been to any CFAR workshop, so I’m not sure of its content and genesis.
I like your prediction-based medicine post. It would be nice if there were a website where that sort of thing was the norm. And we could build expertise on how to test small parts of it.
I don’t see the point of that when it comes to changing paradigms in Kuhn’s sense.
If you consider human society as exploring the world of ideas, it makes sense to leave enough information so that people can see where you have been before. If you apply the no-free-lunch theorems of search, avoiding sampling the same spots of the search space again is the only thing you can do to guarantee that your algorithm is better (on average) than any other.
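A toy sketch of that point (my illustration; the setup is invented): a searcher who consults a shared log of failed candidates never wastes a sample re-testing a known-fallow spot, which is exactly the saving a public record of failures would buy.

```python
import random

def search(candidates, is_success, budget, failed_log=None):
    """Randomly sample candidates, skipping anything already in failed_log."""
    failed_log = set() if failed_log is None else failed_log
    pool = [c for c in candidates if c not in failed_log]
    for _ in range(min(budget, len(pool))):
        candidate = pool.pop(random.randrange(len(pool)))
        if is_success(candidate):
            return candidate, failed_log
        failed_log.add(candidate)  # the public record of failures

    return None, failed_log

# The first "researcher" burns their budget and publishes what failed...
found, log = search(range(100), lambda x: x == 97, budget=30)
# ...so the second only samples the still-unexplored part of the space.
found, log = search(range(100), lambda x: x == 97, budget=30, failed_log=log)
```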
Physicists seem to be able to go off-paradigm and explore promising paradigms like string theory and still be prestigious.
Are the paradigms of the various allowed physical theories really that different in nature? It seems to me like all those physicists basically agree on a method, like building the Large Hadron Collider to find things out in detail and generally doing a lot of mathematical proofs.
But it might be that I simply don’t know enough about theoretical physics.
I personally care more about biology, medicine, and general decision making, and I’m fine with having holes in my knowledge about physics.
Was Metamed born inside LW or apart from it?
I think that depends on how broadly you see LW. It wasn’t directly created by online discussion. On the other hand, the discussion about Bayesianism on LessWrong was likely very influential in them getting their concept together.
They do change the language around things, but not to improve people’s predictions about the world (at least in a scientific sense). They seek to improve how we interact with the world.
GiveWell’s way of evaluating charities improves people’s ability to make predictions about which charities will produce worthwhile effects in the future.
An idea like “room for funding” allows us to make better predictions about the effects of funding a charity.
GiveWell is a science project in the Kuhnian sense. GiveWell makes progress on getting better at evaluating causes. At least I believe that there’s progress.
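For instance (a deliberately crude sketch with invented numbers, not GiveWell’s actual model), “room for funding” is a prediction-relevant variable because dollars past a charity’s funding gap shouldn’t be expected to buy impact at the same rate:

```python
def predicted_outcomes(donation, cost_per_outcome, room_for_funding):
    """Expected outcomes bought, under the crude assumption that money
    past the charity's funding gap achieves nothing extra."""
    return min(donation, room_for_funding) / cost_per_outcome

# A hypothetical charity that can absorb $50k more, at $5k per outcome:
print(predicted_outcomes(30_000, 5_000, 50_000))   # 6.0 outcomes
print(predicted_outcomes(100_000, 5_000, 50_000))  # 10.0, not 20.0: the gap is filled
```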
If you consider human society as exploring the world of ideas, it makes sense to leave enough information so that people can see where you have been before.
Many startups fail because they are too early. Today I can buy pet food online, but that wasn’t a good business model in the DotCom boom.
If somebody had had the idea of molecular biology in the 1920s, his paradigm would have failed.
http://kingsreview.co.uk/articles/how-academia-and-publishing-are-destroying-scientific-innovation-a-conversation-with-sydney-brenner/ is worth reading for the dawn of molecular biology. Sydney Brenner was one of the people who brought that paradigm to life.
Money quote (and part of the reason why I believe the current scientific system is bad at paradigm change):
`It is alarming that so many Nobel Prize recipients have lamented that they would never have survived this current academic environment. What are the implications of this on the discovery of future scientific paradigm shifts and scientific inquiry in general? I asked Professor Brenner to elaborate.
SB: He wouldn’t have survived. Even God wouldn’t get a grant today because somebody on the committee would say, oh those were very interesting experiments (creating the universe), but they’ve never been repeated. And then someone else would say, yes and he did it a long time ago, what’s he done recently? And a third would say, to top it all, he published it all in an un-refereed journal (The Bible).
So you know we now have these performance criteria, which I think are just ridiculous in many ways. But of course this money has to be apportioned, and our administrators love having numbers like impact factors or scores. Singapore is full of them too. Everybody has what are called key performance indicators. But everybody has them. You have to justify them.
I think one of the big things we had in the old LMB, which I don’t think is the case now, was that we never let the committee assess individuals. We never let them; the individuals were our responsibility. We asked them to review the work of the group as a whole. Because if they went down to individuals, they would say, this man is unproductive. He hasn’t published anything for the last five years. So you’ve got to have institutions that can not only allow this, but also protect the people that are engaged on very long term, and to the funders, extremely risky work.`
If you consider human society as exploring the world of ideas, it makes sense to leave enough information so that people can see where you have been before.
Sydney Brenner: I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant. Because I think ignorance in science is very important. If you’re like me and you know too much you can’t try new things. I always work in fields of which I’m totally ignorant.
The AirBnB folks didn’t even know of CouchSurfing when they started AirBnB.
Reading a long list about why ideas fail gives you a perspective on how the person who evaluated them thought they failed. It doesn’t necessarily give you a new perspective.
GiveWell’s way of evaluating charities improves people’s ability to make predictions about which charities will produce worthwhile effects in the future.
I’ll have to think more about this.
I think one of the big things we had in the old LMB, which I don’t think is the case now, was that we never let the committee assess individuals. We never let them; the individuals were our responsibility. We asked them to review the work of the group as a whole. Because if they went down to individuals, they would say, this man is unproductive. He hasn’t published anything for the last five years. So you’ve got to have institutions that can not only allow this, but also protect the people that are engaged on very long term, and to the funders, extremely risky work.
I agree with this. But I still think science has something that the rationality community currently lacks: a place for people with shared interests to meet, Schelling points for people who might paradigm-shift a certain subject. If you can fit your interest into LW or EA, or happen to have people in your area, you are okay. Otherwise you are out of luck.
Sydney Brenner: I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant. Because I think ignorance in science is very important. If you’re like me and you know too much you can’t try new things. I always work in fields of which I’m totally ignorant.
Reading a long list about why ideas fail gives you a perspective on how the person who evaluated them thought they failed. It doesn’t necessarily give you a new perspective.
I don’t think young people should be forced to read everything other people have done before discussing or formulating a hypothesis. But they should be able to explain how their hypothesis differs from things that have gone before if they want large amounts of funding. That should be what literature reviews are about, IMO.
The AirBnB folks didn’t even know of CouchSurfing when they started AirBnB.
With companies, the environment is always changing, so there is less value in knowing about previous attempts. How many rationalist enterprises fit that mold rather than the scientific one, I don’t know.
With companies, the environment is always changing, so there is less value in knowing about previous attempts. How many rationalist enterprises fit that mold rather than the scientific one, I don’t know.
In science the available tools are also always changing. Molecular biology couldn’t have been done fifty years earlier.
Paradigm change is quite often due to new tools that allow new ways of doing research.
If you can fit your interest into LW or EA, or happen to have people in your area, you are okay.
I don’t think the goal of providing places to meet for every possible interest is valuable. You can’t discuss every kind of question at a university either. The kinds of questions that GiveWell investigates aren’t traditional academic questions.
I think the domain of questions that can be asked in LW and EA is quite broad.
When it comes to places to meet, events like the European Community Weekend are good. EA Global also exists for getting people to meet.
The Accelerator Project might create a more permanent place for people to go and find people to do projects with.
I understand this to mean that focusing on expected utility may lead to premature optimization, especially with regard to the unknown unknowns: rejecting a whole line of thought just because it seems from the outside that the probability of it being useful is very low.
Makes sense. But I think that maybe this is more (or also) about the size of the community than about its attitude. I mean, if you had 1000 active LW members, then even if you present an idea that 99% of them reject, the remaining 10 people can organize their own discussion and explore the idea.
But I think that maybe this is more (or also) about the size of the community than about its attitude. I mean, if you had 1000 active LW members, then even if you present an idea that 99% of them reject, the remaining 10 people can organize their own discussion and explore the idea.
You’d need a change in karma so that a post that was rejected wasn’t buried, and so that other people who see the post aren’t put off posting speculative things by the karma hit.
Also, are you expecting 1000 users in Discussion? Because I don’t think speculative posts will get into Main.
I think it would be better if there were a forum separate from LessWrong that allowed/encouraged builders of systems and data hunters to gather with theory builders to work off the data, with a public record of the failures, so that people who are searching can know what has been built and failed to find anything interesting. Maybe it could be pseudonymous, so that people can work without fear for their reputation. Or maybe we should work harder at not worrying about being wrong, as long as we were wrong in good faith. Being secretly wrong is a sad thing to have to do in a rationalist community.
And LessWrong could be a literature review of the successes (intended and otherwise) so that people who are trying to execute can get the best knowledge without lots of cruft.
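To make the “public record of failures” idea a bit more concrete, here is one hypothetical shape such a record could take (the field names and the example entry are my invention, not a spec anyone has agreed on):

```python
from dataclasses import dataclass, field

@dataclass
class FailureRecord:
    idea: str            # what was tried
    apparatus: str       # what was built or measured to test it
    why_it_failed: str   # the evaluator's perspective on the failure
    year: int
    pseudonym: str       # lets people file without risking their reputation
    tags: list = field(default_factory=list)

record = FailureRecord(
    idea="forecast tournament for local meetup decisions",
    apparatus="a shared spreadsheet of predictions",
    why_it_failed="too few participants to score calibration",
    year=2016,
    pseudonym="anon-17",
    tags=["community", "forecasting"],
)
```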
Am I supposed to learn something from this or is this more of a memo to yourself?
I was wondering if it pattern matched to some other writing I could reference. If it doesn’t, I should expand on it.