I’m not sure how much raw intelligence matters. If a person of average intelligence stays with a problem that doesn’t get much attention for 10 years, I see no reason why they shouldn’t be able to contribute something to it.
Being intellectual means staying with intellectual problems over years instead of dropping them because a new television series seems more important.
Since IQ correlates with practically everything, including conscientiousness and the ability to concentrate, I’m not convinced this advice is helpful. The average human may be plain unable to meaningfully stick with a problem for ten years. (That is, to actually productively work on the problem daily, not just have it on the to-do list and load up the data or whatever every so often.) I fear the LW bubble gives most people here a rather exaggerated estimate of the “average”; your median acquaintance is likely one or two standard deviations above the real population average, and that already makes a big difference.
I don’t think working on the problem every day is necessary. For a lot of problems, visiting them monthly does a lot.
If you want to formalize the approach, it’s something like: I have learned something new, X; how does X relate to problems Y_1 through Y_n?
If you inform yourself widely, I think you have the potential to contribute. Most people aren’t intellectual because they don’t invest any effort in being intellectual.
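As a toy sketch of that formalization (all names here are invented for illustration, not from the original comment), the heuristic is just a cross-check loop over a standing list of problems:

```python
# Hypothetical illustration of the heuristic: keep a standing list of
# problems Y_1..Y_n, and whenever you learn something new (X), generate
# the question "how does X relate to Y_i?" for each of them.

def cross_check(new_idea, standing_problems):
    """Return one 'How does X relate to Y?' prompt per standing problem."""
    return [f"How does {new_idea} relate to {problem}?"
            for problem in standing_problems]

problems = ["spaced repetition learning", "polyphasic sleep", "Quantified Self"]
prompts = cross_check("Bayesian survival analysis", problems)
for prompt in prompts:
    print(prompt)
```

The point is only that the monthly visit has a concrete shape: each new thing you learn gets checked against every problem you are carrying.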
Given that papers get published with titles like “Why is Conscientiousness negatively correlated with intelligence?”, I don’t think that’s the case.
Could you give examples of problems like this?
I think I will give three examples of problems I have stayed with over a longer time: spaced repetition learning, polyphasic sleep, and Quantified Self.
Quantified Self is the example where I have the most to show publicly. I did community work in QS. My name is in a dozen mainstream media pieces in a total of three languages. By “piece” I mean newspaper, radio, or TV; I did all of them multiple times.
Spaced repetition learning is one problem which is extremely important but has very few people working on it.
The Mnemosyne data has lain around for years without anyone analysing it. Going through that data and doing a bit of modelling with it should be easy for anyone who’s searching for a computer science bachelor’s thesis or otherwise seeks a project.
Another question would be: how do you calculate a good brain-performance score for a given day from Anki review data? (Anki stores all the review data internally in an SQL database.)
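A minimal sketch of what such a score could look like. This is not Anki’s real schema (the actual review log has more columns); here I assume only a review timestamp in epoch milliseconds and the ease button pressed, where ease 1 means the card was forgotten:

```python
import sqlite3
from datetime import datetime, timezone

# Toy stand-in for an Anki-style review log: id = review time in epoch
# milliseconds, ease = answer button pressed (1 = "again", i.e. a lapse).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE revlog (id INTEGER, ease INTEGER)")

def ms(y, m, d, h):
    """Epoch milliseconds for a UTC timestamp."""
    return int(datetime(y, m, d, h, tzinfo=timezone.utc).timestamp() * 1000)

reviews = [
    (ms(2014, 1, 1, 9), 3), (ms(2014, 1, 1, 10), 1), (ms(2014, 1, 1, 11), 3),
    (ms(2014, 1, 2, 9), 3), (ms(2014, 1, 2, 10), 3),
]
con.executemany("INSERT INTO revlog VALUES (?, ?)", reviews)

# One naive daily "brain performance" score: the fraction of that day's
# reviews answered correctly (ease > 1), grouped by calendar day.
rows = con.execute("""
    SELECT date(id / 1000, 'unixepoch') AS day,
           AVG(ease > 1)                AS recall_rate
    FROM revlog
    GROUP BY day
    ORDER BY day
""").fetchall()

daily_scores = dict(rows)
print(daily_scores)
```

A serious version would have to normalize for what happened to be due that day (long-interval cards fail more often than fresh ones), but the group-by-day query above is the core of the idea.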
You don’t need to be a genius to contribute to either of those two issues. Both problems are pretty straightforward if you can program and have an interest in modelling.
Polyphasic sleep is a problem where I would say that I contribute to the discussion. I tried it probably 8 or 9 years ago and stayed with the problem intellectually. Last year a friend of mine tried Uberman for a month, and while researching the topic he came across something I had written. When we talked about the topic, he quoted one of my online opinions back to me; at first that surprised me, because I hadn’t made that point in his physical presence.
My highest-rated answer on Skeptics Stack Exchange is also about the Uberman schedule: http://skeptics.stackexchange.com/questions/999/does-polyphasic-sleep-work-does-it-have-long-term-or-short-term-side-effects/1007#1007
It’s not like I contributed a breakthrough in thinking about polyphasic sleep, but I did add a bit to the knowledge on the topic.
It’s a real pain to do, though, because the data is so big. A month after I started, I’m still only halfway through the logs->SQL step.
That sounds like you’re doing one insert per transaction, which is the default way SQL operates. It’s possible to batch multiple inserts together into one transaction.
If I remember right, the data was something like 10GB in size. I think a computer should be able to do the logs->SQL step in less than a day, provided one doesn’t do one insert per transaction.
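To illustrate the batching point with a self-contained sketch (this is not the actual Mnemosyne import script): committing after every row ends a transaction per row, which on a disk-backed database means a sync to disk per insert, while wrapping the whole batch in one transaction commits once:

```python
import sqlite3

rows = [(i, f"log line {i}") for i in range(100_000)]

def slow_import(con):
    # One transaction per row: each commit() ends a transaction, which
    # on an on-disk database forces a sync for every single insert.
    for row in rows:
        con.execute("INSERT INTO logs VALUES (?, ?)", row)
        con.commit()

def fast_import(con):
    # All rows in a single transaction: executemany batches the inserts
    # and one commit() writes them out together.
    con.executemany("INSERT INTO logs VALUES (?, ?)", rows)
    con.commit()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logs (id INTEGER, line TEXT)")
fast_import(con)
count, = con.execute("SELECT COUNT(*) FROM logs").fetchone()
print(count)
```

On an in-memory database the difference is modest, but on disk the per-row-commit version is what turns a day-sized import into a month-sized one.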
I believe so, yeah. You can see an old copy of the script at http://github.com/bartosh/pomni/blob/master/mnemosyne/science_server/parse_logs.py (or download the Mnemosyne repo with bzr). My version is slightly different in that I made it a little more efficient by shifting the self.con.commit() call up into the exception handler, which is about as far as my current Python & SQL knowledge goes. I don’t see anything in http://docs.python.org/2/library/sqlite3.html mentioning ‘union’, so I don’t know how to improve the script.
The .bz2 logs are ~4GB; the half-done SQL database is ~18GB, so I infer the final database will be ~36GB.
EDIT: my ultimate solution was to just spend $540 on an SSD, which finished the import process in a day; the final uploaded dataset was 2.8GB compressed and 18GB uncompressed (I’m not sure why it was half the size I expected).
Thanks for the round-up! I thought that by “problems” you meant things like the millennium problems and friendly AI, and I couldn’t picture how average people could make any progress on those (well, maybe some, with dedication), but these make more sense. How easy is it to get funding for these kinds of projects? I’m just wondering because these are still somewhat fringe issues, though of course very important.
Quantified Self is by its nature about epistemology. It’s not certain that you will learn something about how an AGI works by doing Quantified Self, but the potential is there.
A mathematical model of how human memory works, which could be produced by looking at the Mnemosyne data, could also potentially matter for FAI.
FAI is a hard problem, and therefore it’s difficult to predict where you will find solutions to it.
It very much depends on the project. I don’t know how hard it is to get grants for the spaced repetition problems I mentioned. I do think, however, that if someone is seeking a topic for a bachelor’s or master’s thesis, these are good topics if you want an academic career.
The daily Anki score would allow other academics to run experiments on how some factor X affects memory. If you provide the metric they use in their papers, they will cite you.
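A hedged sketch of the kind of experiment that score would enable (all the numbers below are invented): pair each day’s memory score with a measurement of some factor X, say hours slept, and compute their correlation:

```python
from math import sqrt

# Invented example data: one entry per day.
daily_score = [0.82, 0.75, 0.90, 0.70, 0.88, 0.78, 0.85]  # e.g. Anki recall rate
factor_x    = [7.5,  6.0,  8.5,  5.5,  8.0,  6.5,  7.0]   # e.g. hours slept

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(factor_x, daily_score)
print(round(r, 3))
```

Correlation isn’t causation, of course; a real study would manipulate the factor rather than just observe it, but having the daily metric at all is what makes either design possible.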
I don’t understand why anyone would want to work on the Riemann Hypothesis. It doesn’t seem to be a problem that matters.
It’s one of those examples that suggests people are really bad at prioritising. Mathematicians work on it because other mathematicians think it’s hard and solving it would impress them.
It has a bit of Terry Pratchett’s Unseen University about it, which was created to prevent powerful wizards from endangering the world by keeping them busy with academic problems. The only difference is that math might advance in a way that makes an AGI possible, and is therefore not completely harmless.
Could the fact that it doesn’t seem to have many practical applications be what attracts certain people to it? It doesn’t have practical applications → it’s “purer” math. You’re not trying to solve the problem for some external reason or to use the math as a tool; you’re trying to solve it for its own sake. I remember reading studies showing that mathematicians are on average more religious than scientists in general, and I’ve also gotten the impression that some mathematicians relate to math a bit like a religion. There is also this concept: http://en.wikipedia.org/wiki/Mathematical_beauty
It could be that some are just trying to impress others, but I don’t think it’s always that simple.
And to my knowledge, there is some application for almost all the math that’s been developed. Of course, if you optimized purely for applications, you might get better results.
Yes, you are right it’s more complicated.