What are you working on?
Whpearson recently mentioned that people in some other online communities frequently ask “what are you working on?”. I personally love asking and answering this question. I made sure to ask it at the Seattle meetup. However, I don’t often see it asked here in the comments, so I will ask it:
What are you working on?
Here are some guidelines
Focus on projects that you have recently made progress on, not projects that you’re thinking about doing but haven’t started; those are for a different thread.
Why this project and not others? Mention reasons why you’re doing the project and/or why others should contribute to your project (if applicable).
Talk about your goals for the project.
Any kind of project is fair game: personal improvement, research project, art project, whatever.
Link to your work if it’s linkable.
I am writing a few papers and a book on machine ethics and superintelligence.
My goals with this work are to:
Summarize the existing literature from machine ethics on how to design the motivational system of artificial moral agents (a surprisingly little-discussed problem so far; probably less than 5,000 pages in the academic press!) and apply it to the specific problem of superintelligence.
Update and strengthen the Good / Chalmers argument for why a superintelligence is likely to arise within a few centuries if global catastrophe or active prevention do not occur.
Explain in detail why a few dozen commonly proposed “solutions” to the problem of Friendly AI will not work. (Basically, catch everybody up to where Eliezer Yudkowsky was as of about 2004.)
Translate the contributions of the SIAI community to machine ethics into the language of mainstream philosophy and science, to give SIAI more credibility and attract more elites to the cause of solving the Friendly AI problem.
I program by day, and program some more by night. I just finished Go Scoring Camera, an Android app that uses computer vision to interpret Go boards, and I’m starting on Keyboard Builder, a tool for customizing Android phones’ on-screen keyboards. I’ll keep writing phone apps until they’re sufficient to provide a passive income.
I think I mentioned it before, but you could go further with the Go computer vision thing—Ken Thompson considerably improved OCR results for scanning chess books by adding in domain-specific knowledge (about chess): http://doc.cat-v.org/bell_labs/reading_chess/
+1 for “I program by day, and program some more by night.”
He’s a programmer, and he’s okay; he hacks by night, programs by day...
To expand my Go comment (did I say this somewhere else? I feel like I did, but I can’t find it): what I mean is that you could generate possible configurations and rank them by their semantic content.
For example, you could feed each possible configuration through GNU Go, asking it to score the configurations, and pick the configuration which is most ‘intelligent’/likely to be produced by strong scorers. Or, you could instead tell GNU Go that ‘black is a 20 kyu player, white is 1 dan’ and rank configurations by which configuration is most consistent with such a differential (configurations with stupid moves by black being far more likely than stupid moves by white, and the converse).
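Plumbing-wise, something like this Python sketch would do for the first idea (assuming a gnugo binary on the PATH; `estimate_score` is GNU Go’s GTP scoring command, but the ranking heuristic at the end is just a placeholder, not a claim about the best one):

```python
import re
import subprocess

def gnugo_estimate(stones, boardsize=19):
    """Set up one candidate reading of the board over GTP and ask GNU Go
    for an estimated score. `stones` is a list like [("black", "D4"), ...]."""
    cmds = ["boardsize %d" % boardsize, "clear_board"]
    cmds += ["play %s %s" % (color, vertex) for color, vertex in stones]
    cmds += ["estimate_score", "quit"]
    out = subprocess.run(["gnugo", "--mode", "gtp"],
                         input="\n".join(cmds) + "\n",
                         capture_output=True, text=True).stdout
    # GTP success responses look like "= W+3.5"; failures start with "?"
    m = re.search(r"=\s*([BW])\+([\d.]+)", out)
    return None if m is None else (m.group(1), float(m.group(2)))

def rank(candidates):
    """Placeholder heuristic: drop candidate readings the engine rejects,
    then prefer the least lopsided estimated result (a flipped or phantom
    stone tends to produce an absurd margin)."""
    scored = [(c, gnugo_estimate(c)) for c in candidates]
    legal = [(c, s) for c, s in scored if s is not None]
    return min(legal, key=lambda cs: cs[1][1])[0] if legal else None
```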
Making Bayesian statistics easier and more accessible by coding advanced sampling algorithms for PyMC
Some background: I took statistics in high school because it seemed vaguely useful. Unfortunately, the material seemed very dry and involved mostly memorization and few general principles. It was boring and limited. College statistics was the same thing. I did some internships and statistics seemed very useful for figuring things out, but I didn’t know how to do very much.
Later I started reading Overcoming Bias, and Yudkowsky kept mentioning this thing called “Bayes’ theorem” and how it was really powerful. I read a stats book on Bayesian statistics and my mind was blown. The statistics I had been taught was a collection of formulas that gave answers but not much insight; Bayes’ theorem encapsulated not just all of the statistics I had learned but the very notion of “learning from data.” I was hooked.
Later I figured out that the curse of dimensionality is what makes complex problems difficult (even though the simple statistics taught in stats classes remain easy).
My project: Bayes’ theorem provides a simple, coherent framework for learning from data. It massively clarifies how to think about data. It is something all engineers (and technical folk in general) could and should know. Not only is Bayesian stats very practical, but it turns a topic that even nerds find confusing and boring into something elegant and interesting.
I want to make fitting Bayesian models as thought-free as possible. Calculating the posterior distributions for your models is often very difficult and is usually the most constraining issue. This is often true (though less so) even if you know a great deal about statistical computation.
As I have discussed here, I think the current lowest-hanging fruit is the use of gradients and higher derivatives in algorithms for sampling from the posterior distribution. Thus my project for the last year-plus has been to improve PyMC, a Python package for doing Bayesian inference: adding gradient information, implementing advanced general sampling algorithms from the literature, and making PyMC’s syntax simpler, more intuitive, and more powerful.
On my blog I linked to a package I built with a sampler that uses Langevin dynamics (which needs gradient information), but more recently I have found that Hybrid (or Hamiltonian) Monte Carlo is much simpler in practice and works much better. This is my Hamiltonian MC implementation.
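In case anyone wants to see the core idea without digging through PyMC, here is a bare-bones HMC transition in plain NumPy (the textbook leapfrog integrator plus a Metropolis correction; a sketch, not my actual implementation):

```python
import numpy as np

def hmc_step(q, logp, grad_logp, step_size=0.1, n_steps=20, rng=np.random):
    """One Hamiltonian Monte Carlo transition for a target density exp(logp)."""
    p = rng.standard_normal(q.shape)              # resample momentum
    q_new, p_new = q.copy(), p.copy()
    # leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * step_size * grad_logp(q_new)
    for _ in range(n_steps - 1):
        q_new += step_size * p_new
        p_new += step_size * grad_logp(q_new)
    q_new += step_size * p_new
    p_new += 0.5 * step_size * grad_logp(q_new)
    # Metropolis accept/reject corrects the discretization error
    h_old = -logp(q) + 0.5 * p @ p
    h_new = -logp(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q
```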
I am currently trying to improve my HMC sampler, and working on making PyMC faster and easier to maintain and extend.
If you know Bayesian stats and have some programming skills, I invite you to help me improve statistical computation! Just message me!
Why in python instead of R? R is used much more widely among people actually doing statistics, as far as I know.
I know, but R is really really terrible, and I hate working in it while Python is a joy to use and develop.
Out of curiosity, what don’t you like about R?
The biggest thing is probably its lack of good debugging tools: R’s tracebacks are not very informative. Its handling of arrays is also significantly inferior to NumPy’s. For example, R tends to have separate functions for applying a function along different dimensions of an array, whereas NumPy almost universally uses an argument to specify the axis along which to apply a function. Also, doing much of anything non-statistical is a royal pain.
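A small illustration of the axis-argument point (the NumPy lines are exact; the R equivalents are from memory):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
a.sum(axis=0)                    # column sums -> array([3, 5, 7])
a.sum(axis=1)                    # row sums    -> array([ 3, 12])
# R instead spells these as two separate functions, colSums(a) and
# rowSums(a), with apply(a, 2, sum) / apply(a, 1, sum) for the general case.
```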
What about your work in economics to develop a stronger intellectual justification for quasi-monetarist policies?
I would describe it more as compilation/explanation of existing theory in a more accessible and non-ideological way, but yes I am doing that too. I am a bit less optimistic about that though, because it’s a big complex topic and my explanation skills aren’t really that fabulous.
Which one?
It took me too long there to realize that you were asking which book, rather than which mind.
Er, you were asking about the book, right?
L.O.L.
It was Bolstad’s Bayesian Statistics, but I now recommend Sivia’s Data Analysis: A Bayesian Tutorial, as I mentioned in the textbook thread.
I’m writing the core of a system similar to Google Wave, but much simpler, more flexible, and mathematically elegant. It works peer-to-peer, too, which is nice. It’s got some clever algorithms in it, and even cleverer data structures, and I’ve been through several iterations of the design and code. It’s probably the most complex thing I’ve ever written, and it could be pretty darn handy in the future. This is a surprisingly difficult problem to solve well.
(I also did a quick detour one-off project: pwstore, for anybody wanting to store passwords in Haskell. I feel pretty good about this one; it’s useful.)
I’m also writing a NaNoWriMo book, which I’d left half-finished back in November, because it actually was a lot of fun to write. I’ve got one wonderfully snarky character; if I give her first-person narrative, the story becomes extra fun. I don’t even know how many words I’m up to now. It’s a sprawling fantasy adventure thing.
I loved the idea of Wave, and was sad that it didn’t catch on well. I’d love to hear more about your project and/or hear about it when it goes live.
I actually have a demo of an earlier version up on a nameless Amazon AWS micro instance, built with node.js and redis and a rather terrifying amount of JavaScript code:
http://ec2-50-16-79-81.compute-1.amazonaws.com/notepad/931ab120-b66b-4275-a20c-e4d7b70b4c62
There are some weird browser compatibility issues, but it works in vaguely recent versions of Chrome and Safari and Firefox.
There are two parts to it. The core of what I’m making is, essentially, a synchronized string data type: a string where changes are immediately applied locally, and propagate across the network. The other part, necessary for actually using the thing, is making an application which maps documents onto strings. That’s not really hard so much as it is irritating, so I won’t talk more about it.
The synchronized string part is actually pretty cool. You apply some arbitrary patch of insertions and deletions, and send this out over the network. No matter what order these patches are applied in, they’re guaranteed to converge to the same ultimate result—it’s a lot like using a distributed version control system, except that you do a branch and merge on every keystroke, in a way that isn’t slow and doesn’t break horribly. :-)
You get the complete version history with the document, along with per-character records of who inserted it. If you want to have a server holding the canonical version of the document, you can do that, or go serverless. Technically, it’s based on the really clever idea of causal trees (PDF) invented by Victor Grishchenko; I’ve just been adding refinements so it goes fast and doesn’t involve terrifying regular expressions.
Since you mentioned Haskell...
What’s the connection to Darcs? A faster, less powerful version, since branches won’t be long-lived and not all that much commutation should be necessary?
It’s pretty similar to Darcs-style patch manipulation. I’ve only skimmed some introductory material on exactly how Darcs does it, but it looks a lot like what Wave does:
Make patches invertible.
Define a function

    T :: Patch -> Patch -> Patch

that transforms one patch to apply to the document after a second patch has already been applied to it.

With these primitives, you can arbitrarily re-order patches. That’s not quite the way I’m doing it, though. I represent the document as a tree of atoms, each one with a unique id and a pointer to a predecessor. You get the document state by doing a pre-order depth-first traversal of this tree, where siblings are traversed in an order that’s guaranteed to be the same on all clients, which I won’t define because it would take way too long to write out.
The patches are forests of atoms, and they get anchored to one or more nodes in the document tree. Atoms are never deleted, but they can be made invisible by attaching a special deletor atom to them. This guarantees that patches in the future will always be able to find their anchor points. I’ve tried both this and a system closer to Darcs’, and this was simpler to understand and implement.
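To make that concrete, here is a toy rendition in Python. It is emphatically a sketch and not the real code: ids are simplified, out-of-order delivery is ignored, and the sibling-ordering rule I declined to define is stood in for by a naive sort:

```python
from dataclasses import dataclass

@dataclass
class Atom:
    id: tuple        # (site_id, counter): unique across all clients
    cause: tuple     # id of the atom this one is anchored to
    char: str        # payload, or "" for a deletor tombstone
    deleted: bool = False

class CausalTree:
    def __init__(self):
        self.root = ("root", 0)
        self.atoms = {self.root: Atom(self.root, None, "")}
        self.children = {self.root: []}

    def insert(self, atom):
        """Patches commute: applying one just hangs atoms off their causes,
        in whatever order the patches arrive (causal delivery assumed,
        i.e. an atom's cause has already arrived)."""
        self.atoms[atom.id] = atom
        self.children.setdefault(atom.id, [])
        siblings = self.children.setdefault(atom.cause, [])
        siblings.append(atom.id)
        siblings.sort()   # stand-in for the real deterministic sibling order

    def delete(self, target_id, tombstone_id):
        """Nothing is ever removed; a deletor atom marks its cause invisible,
        so future patches can always find their anchor points."""
        self.insert(Atom(tombstone_id, target_id, ""))
        self.atoms[target_id].deleted = True

    def text(self):
        """Document state = pre-order depth-first traversal of the tree."""
        out = []
        def walk(atom_id):
            a = self.atoms[atom_id]
            if a.char and not a.deleted:
                out.append(a.char)
            for child in self.children.get(atom_id, []):
                walk(child)
        walk(self.root)
        return "".join(out)
```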
In terms of power, this is equivalent to what Darcs does with permuting patches, and it can handle arbitrarily long-lived branches. My system’s patches commute naturally without any transformation, no matter how far document states may have diverged, which is a plus.
As for the speed, all operations are at most linear in the size of the document, with small constant factors, and I’m working on making the common cases all happen in logarithmic time, using monoid-cached trees—the finger tree is a prominent example. I don’t know how this compares with Darcs, but I wouldn’t expect them to be asymptotically faster here.
If I had more time and wasn’t so focused on just getting this done so I can get out of grad school, it would be interesting to make a DVCS based on this.
That is pretty neat.
Neat! Thanks for the info.
I’m working on editing Shikamaru vs. the Logical Fallacies, a rationalist fanfic that I linked to here, as well as encouraging the author to create more. We recently finished Part 1 of Chapter 1.
I’m doing this to increase my and the author’s rationality, to practice my editing skills, to help the author practice his writing skills, because it’s fun for me and the author and the readers, and to raise the sanity waterline as much as possible, since this is a chance for readers to also increase their own rationality. Readers can assist in all of these goals by reading the fic, reviewing it, discussing it, and spreading it.
I’m writing a blog post every day in order to cultivate a writing habit, create a useful body of introductory material for the subfields of philosophy I find most worthwhile, and perhaps make a good impression on future employers (I’m still a college student and don’t have the competence-demonstrating portfolio I’d like).
Edit: my blog is located at http://www.wrongbot.com
Link it!
Yours is one of my new favorite blogs. I love your teeny tiny simple posts, and am linking to many of them.
I’ve noticed, and I really appreciate it. Glad I’m doing something that works.
This is a great idea for a thread! We should have these at regular intervals.
I’m working on turning my PhD work into a product. The idea is to take logic-based specifications, expressed as structured English, and produce working information systems (such as web apps) from them. I have a prototype, which got me the PhD, but more work is needed to get to a viable product. An intermediate step I am focusing on at the moment is using this technology to write validation checks that pick out errors/outliers in existing large datasets. So someone would apply the rule “Each patient with diabetes type I must be prescribed insulin” to a relevant dataset, and the system would pick out cases where this does not hold. Normally this would need a complex SQL query that must be written by a developer and cannot be verified by the domain expert.
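To make the diabetes example concrete, here is roughly the kind of check such a rule compiles down to, sketched in Python with an embedded SQL query. The schema (patient, diagnosis, and prescription tables) is invented for illustration; the real system’s data model is its own:

```python
import sqlite3

# "Each patient with diabetes type I must be prescribed insulin"
# becomes: find the patients for whom the rule fails.
VIOLATIONS = """
SELECT p.id, p.name
FROM patient p
JOIN diagnosis d ON d.patient_id = p.id AND d.code = 'diabetes_type_1'
WHERE NOT EXISTS (
    SELECT 1 FROM prescription rx
    WHERE rx.patient_id = p.id AND rx.drug = 'insulin'
)
"""

def check(db_path):
    with sqlite3.connect(db_path) as conn:
        return conn.execute(VIOLATIONS).fetchall()
```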
The other thing I’m working on is nurturing my addiction to the 17x17 challenge. This is my first foray into ‘serious’ math and I’m finding it extremely addictive. I usually dive into this when my motivation is low, and lo and behold, I’m motivated again! Having spent 7 of the last 14 months on this, I still have no solution, but I do think that if the filters I have work right, I will have trimmed the problem space I’m focusing on to about 52 CPU-days. Now I’m working on implementing these filters and speeding up my primary algorithms to reduce that further.
Finally, my perpetually on hold project is a social news community that provides a personalised experience, is spam-free and requires no moderation. Some of the ideas were expressed in my optimization by proxy articles. The conceptual framework is complete, but coding and productising is too much of an energy draw at the moment, so I’m leaving it aside or pushing it forward at a very slow pace.
If anyone wants to know more about any of the above, just ask.
I’ve been thinking about making this a regular occurrence as well. ~ Monthly seem about right?
Is the motivation for your PhD-related project primarily to facilitate “evidence-based decision making”? Also, do you have a link?
Monthly is great! It’d be great to have regular progress updates from all the people in this thread. I’m part of a group in London that has biweekly meetings to do just that, and I think it’s a great way of motivating people.
About my PhD stuff, I can point you to papers, but I’m not sure that’s what you have in mind. I do have a software prototype but it has not been released as of yet. So far, I have been demoing it to interested people over skype. (I should probably record a screencast. You know what, I’ll take that as an action item for the next thread.)
The motivation initially was nothing particularly noble. As a web developer I got sick and tired of rewriting forms over and over, so I went and made a kind of declarative, model-driven solution. Then, instead of polluting it with processes and workflows, I built a constraint-based system on top of it. (6 years in 2 sentences!) Its grand mission, if it has one, is to break people out of the process-based way of designing systems.
The application I’m working on right now though, has been mentioned as a way to help decision-making by someone I was talking to. Particularly, applying these rules to real-time data streams (such as startup dashboards) and flagging situations that need decision maker attention.
Another similar theme I could imagine would be to gradually build a model around a datastream and have the system inform you if any of your assumptions are broken by the data. (in which case you’d need to revise your model).
Will give a bit more thought to this “evidence-based decision making” concept, thanks.
I now plan to do a thread like this once a month on the first Thursday. If they don’t seem popular, I will stop them.
I’m working on tightening the bounds on the Jordan-Schur theorem. I’ve improved the best known bounds but not by much. If this project does succeed it might end up turning into my PhD thesis.
Out of idle curiosity (I haven’t studied any proof of the Jordan-Schur theorem), are you doing that by tuning up the existing proofs of best bounds through more careful analysis, by replacing relatively large chunks of those proofs by new arguments, or by employing an altogether new proof?
Right now, I’m focused on tightening a specific lemma that turns out to be of independent interest and is used in a few other contexts also.
For a given n x n matrix X with coefficients in C, let ||X|| denote the Frobenius norm. (The Frobenius norm is essentially just Euclidean distance where one treats each matrix coefficient as two Euclidean coordinates, one from its real part and one from its imaginary part.)
Lemma: Let A and B be unitary matrices, and let C be the commutator of A and B (that is, C = ABA^(-1)B^(-1)). Then ||I - C||^2 ≤ 2 ||I - A||^2 ||I - B||^2.
This lemma is a standard step in the proof, and it turns out that the strength of this inequality is one of the major limiting issues in improving the bounds. So I’ve been working on tightening that lemma. I’ve been somewhat successful in improving the lemma with the same hypotheses (i.e., just that A and B are unitary), but it turns out that for the purposes of Jordan-Schur one can without any trouble assume that A and B themselves generate a finite group, so I’m trying to see if I can substantially improve things with a version of the lemma that uses that additional assumption.
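For anyone who wants to play with it, here is a quick numerical sanity check of the lemma as stated (plain NumPy; random unitaries via QR of a complex Gaussian matrix):

```python
import numpy as np

def random_unitary(n, rng):
    # Q from the QR decomposition of a complex Gaussian matrix is unitary
    # (exact Haar uniformity doesn't matter for checking an inequality)
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(z)
    return q

rng = np.random.default_rng(0)
n = 4
I = np.eye(n)
for _ in range(1000):
    A, B = random_unitary(n, rng), random_unitary(n, rng)
    C = A @ B @ A.conj().T @ B.conj().T   # commutator: a unitary's inverse
                                          # is its conjugate transpose
    lhs = np.linalg.norm(I - C) ** 2      # np.linalg.norm of a matrix
                                          # defaults to the Frobenius norm
    rhs = 2 * np.linalg.norm(I - A) ** 2 * np.linalg.norm(I - B) ** 2
    assert lhs <= rhs + 1e-9
```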
1) Expanding my programming capabilities. This is because it’s vital if I’m ever to develop one of my other big ideas. I can come up with algorithms and models pretty easily, but am not at the stage where I can put them into a user-friendly, non-command-line format, or build off of others’ code (which rarely comes with easy compiling instructions).
I’ve tried various frameworks, but they’re all hard to get started on—more so than the inherent difficulty of programming would suggest. I follow the instructions just to get it set up, but they always leave out something vital so that I have to get an expert to look at why it doesn’t set up right. I’ve tried Django+Python, the Android SDK, Matlab (incl. with Simulink), and .NET, becoming most proficient with the last two.
It’s not all gloomy, though. Some successes: completing a useful internal development project at work that involved setting up a GUI for easy database lookup and clean presentation of data. In technical computing languages, I’ve set up something similar for signal processing and automated generation of relevant plots. I was able to modify a Firefox extension so that I could deftly browse websites from the keyboard alone with one hand (no, not for “that”). I could only figure out how to do it on my Linux box, though.
2) “Infiltrating” an anti-rationalist group by purporting to be “one of them”. I’m doing it to see what makes these people tick: to what extent they really believe this stuff, how well they achieve their goals (vs. failing in a roundabout way), what kind of useful signaling functions the group can serve, etc. (Obviously I can’t go into more detail here, but feel free to PM/email me.)
3) Learning about cryptography (by reading Schneier’s Applied Cryptography), hopefully to assimilate it at the 2+ Level. I find it important because it both has practical application, and bears directly on fundamental questions applicable to many circumstances: under what conditions you can make what inferences about a source or the meaning of something; where problem complexity/difficulty can originate, and how to increase or decrease it; what is randomness, etc.
It also trains you to truly understand and represent a system well enough to know its strongest and weakest links. I’ve been impressed at seeing how people with experience in crypto are able to carry its concepts over to a fuzzy, real-world situation and justify counter-intuitive conclusions about why a given practice will actually increase or decrease vulnerability.
4) Experimenting with brain-machine interfaces (got an NIA and the Force Trainer, don’t laugh). This is because I have an interest in improving interfaces, and want to see what the potential is for removing a major bottleneck in interacting with a computer, or overcoming difficulties. (One idea is to combine the NIA with Dasher.) I think there is great potential, both for brain exercise and for extending expressive capabilities through a direct brain link.
I’m not sure Applied Cryptography is a brilliant place to start. Practical Cryptography is in many ways a kind of apology for the sorts of mistakes that people make after reading Applied Cryptography. Though warm fuzzies do have a role, people don’t appreciate the extent to which we care a lot less about them than about what you can prove, whether that’s, e.g., a security reduction or resistance to differential and linear cryptanalysis.
Thanks for the pointer!
What are the projects that you want to do that require programming? It sounds like they require a web interface?
Some do, yes. Generally, anything that I’m going to want to make easily accessible to others for use will probably require it. I have lots of ideas I want to implement, but they mostly involve data-mining and machine learning, which would involve applying a routine to large datasets, and I want to be able to build off of others’ work. Basically, implementations of an inference engine.
(One “toy” program I wrote generates a Markov model of a given body of text—i.e., for a given string length, collect all strings in the text of that length, find which characters are likely to come after instances of each string, and then randomly generate a new text from a given seed string. I know it’s not a new idea, etc., but it was just to get my feet wet. And I was limited to the command line and reading in a local file.)
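For concreteness, the whole toy fits in a few lines of Python (roughly as described above; the seed string is assumed to occur in the corpus):

```python
import random
from collections import defaultdict

def build_model(text, k):
    """Map each k-character string to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - k):
        model[text[i:i + k]].append(text[i + k])
    return model

def generate(model, k, seed, length, rng=random):
    out = seed
    for _ in range(length):
        followers = model.get(out[-k:])
        if not followers:
            break                     # dead end: k-gram never seen
        out += rng.choice(followers)  # sample proportionally to frequency
    return out

# usage, with a hypothetical corpus file:
# model = build_model(open("corpus.txt").read(), k=5)
# print(generate(model, k=5, seed="The quick", length=500))
```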
Also, I want to model control systems from an information-theoretic and thermodynamic perspective: what energy flows and entropy generation happen in order to keep a system stable, and perhaps replicate.
Being able to do more work with the brain signals requires more programming knowledge too.
Those things don’t really sound like they require a web interface, but just sharing of your code. I can easily imagine that building a web interface for a technical program involves several non-trivial steps (especially dealing with libraries), because that’s not that common of a combination. I suggest you stick with learning the programming part first and handle the sharing part second. After all, once you have something that works in one language, it’s not usually too difficult to translate it into a different language if something else seems more appropriate.
I personally think that Python works really well for technical computing (or for non-technical computing, for that matter). NumPy (the array package) is much better designed than comparable packages in other languages. It’s also very easy to share your code (PyPI is the standard place to put your package). So toss Django for now.
I enjoy giving advice on this subject, so feel free to ask lots of questions (if you’re interested in the answers anyway).
I am thinking about the design of quantum money, quantum copy-protected programs, and quantum program obfuscation. Basically, understanding whether and how quantum computers can implement the strongest possible cryptographic operations (all of which are known to be classically impossible and to have a wide range of applications).
I am working on the development of collaborative filtering / recommendation protocols for large communities. This includes, for example, a system for aggregating product reviews which is robust to the presence of large numbers of planted reviews, or a system for spam filtering based on trust which remains robust in the presence of a supermajority of sophisticated spammers. More generally, this work seems like an important first step in the development of automated and robust systems of trust.
I started working on these projects because they were interesting problems I was well equipped to work on, which seemed likely to yield publications in time for graduate school applications (optimistically, these expectations have been borne out in both cases). I don’t recommend this decision-making process.
I think automating trust and designing better recommendation systems are more important than the overwhelming majority of theoretical problems currently studied, but I have realized more recently that there are more important issues to deal with. I am continuing to work on these problems now because of inertia, the fallacy of sunk costs, and a desire for status.
I’d be interested in your work on recommendation systems. How well does it deal with semi-intelligent spammers? That is, spammers who copy other, normal people’s ratings for the most part but then alter their behaviour to promote something beyond its worth.
Personally I think a good recommendation system is something that can have a vast impact on society. Mainly through knock-on effects; if you save charity X time and money on deciding which programmer to employ (due to a recommendation system) they can then spend that money on actually helping people.
I’m interested in what you think is more important!
The most obvious thing distinguishing my work from previous attempts is that it attains all its guarantees even if 99% of all users are arbitrarily sophisticated adversaries. The amount of time depends on the logarithm of the fraction of honest users. So if only one user in 10^10 is honest, it will take about 30 times longer to converge to good recommendations.
The goal is to find the best fixed moderation strategy (i.e., block all messages from people X, Y, Z...) for the honest users. Here the definition of honest users is arbitrary: if you choose any subset of the users to declare honest, then over a reasonable time frame the recommendations produced by the recommender system should do as well (using whatever scoring metric you use to give feedback) as the best fixed moderation strategy tailored to those users. Note that the system doesn’t know which users are “honest”: the guarantee holds simultaneously over all choices.
Right now I have a protocol which I believe converges quite quickly (you only get a polylogarithmic number of “bad” recommendations before convergence), but which is computationally completely infeasible (it takes an exponential amount of time to produce a recommendation, where I would like the amount of time to be significantly sub-linear in the number of users). I’m optimistic about reducing the computation time (for example, I have a linear time system which probably works in practice; the question is whether there is any way for an adversary to game the approximations I have to use).
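(For readers wanting intuition about “as well as the best fixed strategy in hindsight” guarantees: that is the same shape as the regret bounds in the experts literature. The multiplicative-weights sketch below is not the protocol described above, and its adversary model is far weaker; it only illustrates the flavor of the guarantee.)

```python
import numpy as np

def mw_recommend(strategies, losses, eta=0.1, rng=np.random):
    """strategies: candidate fixed moderation strategies.
    losses[t][i]: loss strategy i would have incurred on round t.
    Total expected loss stays within O(log(len(strategies)) / eta) of the
    best fixed strategy in hindsight."""
    w = np.ones(len(strategies))
    picks = []
    for loss_t in losses:
        i = rng.choice(len(w), p=w / w.sum())    # sample a strategy
        picks.append(strategies[i])
        w *= np.exp(-eta * np.asarray(loss_t))   # downweight mistakes
    return picks
```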
If you want another pair of eyeballs, I’ll have a look.
I am very interested in your “collaborative filtering / recommendation protocols for large communities” work, as I’ve been toying with similar ideas for a while. If you could say more about your approach or can link to anything I could read, I’d appreciate it very much.
I’m in the early stages of my PhD research in metabolomics, which specifically means that I’m constructing what will end up being a library of tagged proteins to use in probing protein-small metabolite interactions. While the genomes and proteomes of quite a few model organisms are well-understood, cross-pathway regulatory interactions are another extremely important factor in metabolism, and these have had only minimal effort put to characterizing them. My work is therefore aimed at finding and characterizing these interactions on a broad scale, and incorporating that understanding into existing computational models in order to facilitate rational design in biochemical engineering.
I also have a fanfic project going that I started to 1) get better at writing; 2) challenge or avoid the most painfully prevalent tropes of the particular fandom (Labyrinth); and 3) keep me somewhat sane. I found Eliezer’s HPMoR not too terribly long ago (it was my favorite method of procrastination while studying for quals, in fact...), and that’s given me perhaps altogether too many ideas for other potential study distractions after I finish the story. http://www.fanfiction.net/s/6347718/1/bTalespinner_b
I read Talespinner. I love it. (It includes perhaps the best prose kisses I have ever seen, among other charmingly evocative description and engaging characterization.) Please do not fall into the commonplace trap of abandoning your fic in the middle to work on something else. I want to read the rest of it. Also read and liked Olive Branch, though not as overwhelmingly much.
Thank you! I am pretty bad about abandoning free-time projects when a new idea comes along, but I’ve promised myself and a few other people that this will be finished come hell or high water. (And I have a test-reader who gives me no peace when I’ve been taking too long on an update...)
Investigating programmer productivity. Learning about the history of the programming profession in general, of the “software engineering” meme in particular, and detailed history of the “agile” meme. Learning some R and stats on the side.
Why—because there is a lot of leverage in looking at the question of why programming skills are what they are, and how they could generally be better, compared to just improving my own. There seem to be some low-hanging fruit, such as encouraging programmers to be more aware of cognitive biases affecting their work, or of the nature of probability and tools for working with it.
I enjoyed this article when I saw it on HN. This is an area which really needs more/better research, so thanks for looking into it.
Thanks! I have a longer follow-up (6K words), which I’m not publishing yet because it might end up as one part of a point-counterpoint feature in IEEE Software’s column “Voice of Evidence”. If you message me your email I’ll be glad to send you a draft.
I am trying to understand personal interaction in a semi-technical fashion, so as to have mental models intuitive enough to guide me in casual (and, later, business) interaction. I’ve read through some of the OB posts on status, and I’m currently reading the book Impro (which Robin Hanson recommends here). I’ve also made it partway through Dale Carnegie’s How to Win Friends and Influence People. Book suggestions are very welcome.
Robert Greene—The 48 Laws of Power. About the same level of cynicism as Hanson but more instructional. Very entertaining. :)
A problem with knowing about such ‘laws’ is that you’ll start perceiving false positives in other people and will hate them for using those techniques against you.
I don’t consider the willful maintenance of false illusions to be a sustainable way of securing oneself against disillusionment.
I note that if you start hating folks for using those techniques against you, then you are probably already ignoring Greene’s advice. He consistently advises against taking things personally, while also describing how ‘hating people for doing X’ is itself a manipulative technique for gaining power.
I don’t think that I understand everything that you’re saying.
This is a valid concern, but I’m not sure how it relates to this:
Regardless of whether you are vulnerable to such techniques, I don’t see why hearing arguments for their existence would harm you. (I can see that it would be a waste of time if they turned out to be useless, but from what little I’ve read of him, it would be a mistake to gamble on the hypothesis that they will be useless by not reading him at all.)
I said it is a problem, a potential bias that needs be to countered. It was not my intention to suggest that one shouldn’t learn about unethical persuasion techniques or the like. I actually ordered the book.
A few times I was accused of, and saw people on LW accuse others of, using some kind of ‘forbidden’ rhetoric: techniques I had never even heard of before, and which I was sure the accused never intended to deploy deliberately. This shines a bad light on the people who have been accused. The right way would be to kindly remind them of the shortcomings of their argument, or that their style of response might be harmful in a discussion whose purpose is dissolving confusion, refining rationality, or understanding disagreement.
Really? From what I’ve read of Greene’s books (while I was staying in User:Cosmos’s room in NYC...), his general format seems to be:
1) Give gripping narrative of historical event.
2) Shoehorn the event to use as validation for some vaguely-specified “law” (“Don’t be afraid”, “act covertly”, etc.)
EDIT: And that can probably be expanded to:
3) When you have enough of these, combine them into a book.
4) In response to popularity, generate new books, scraping bottom of barrel as necessary.
Dorikka already mentioned reading through Robin Hanson’s posts on status. In terms of ‘Shoehorning’, if he can stomach Hanson I expect him to consider Greene altogether benign.
Please keep us posted on the results.
The forthcoming Tempo has potential as a guide to strategic interaction within an organizational context, based on the author’s insightful blog.
Non-book suggestion: I think it is likely that at a certain point practice becomes more important than reading. I’d be wary of spending too much time in the books and not enough in the field. My most successful means of getting lots of practice was direct sales, but this is emphatically not for everyone.
Any thoughts on how something like this could work in a group setting? I confess to being a little stumped.
I am working on a 2D adventure game that features some topics from lesswrong: rationality and recursive self-improvement.
I love making games, and this seems like a good way to take the basic concepts we take for granted on LW (like AI going FOOM, or cryonics) and present them in a different way, from a different angle, in a simpler fashion, in a different medium, to a new audience.
Goal: make enough money that I can do this again. And again. And again. Continue making games about various lesswrong topics.
Popularizing rationality is harder than most rationalists realize. As a very high lower bound, I don’t think Eliezer is succeeding at it with MoR. I strongly encourage you to practice it with a project you can finish as soon as possible, like a blog-post-sized short story.
In general, people who want to succeed in creative endeavors tend to underestimate the importance of practicing through many cycles of making something and learning from it. It’s far too common to try to take on a large, long-term project right off the bat.
(The sequences generally succeed at communicating their ideas, but they’re aimed at an audience that’s already more intelligent, intellectual, thoughtful, and rational coming in, which is a different and easier task than popularizing difficult, complicated ideas.)
Thanks, this is a good precaution to take. My goal isn’t to popularize rationality per se (I can’t speak for Eliezer; I don’t know what his goal is with MoR); it’s more to show various topics/principles/ideas introduced here in a different light, not necessarily to defend them or to explain them. I think people playing a fantasy game are a lot more receptive to crazy/unusual ideas, so it will be easy to sneak those ideas past their radar. If I don’t succeed at it with the first game, at least I’ll still have a pretty good and fun game. Thanks for the word of caution.
By the way, as examples of very fast game development, you might want to look at Increpare, Klik of the Month, and Speed-IF.
I am working on methods for control design in nonlinear stochastic systems. In other words, given some sort of robot or other mechanical system, how can I do some amount of precomputation to then allow the system to solve a wide range of tasks in real time?
The general strategy for solving this problem is to patch together many locally valid control policies into a global control policy. This involves verifying that a given feedback law will accomplish a given task in a given region, which usually reduces to solving a semi-definite program.
However, these semi-definite programs can be quite large, so I am also studying methods for approximately solving large semi-definite programs.
Finally, I am working on a method for verifying that a control policy will accomplish its goal even in the presence of noise (so far most of our work has focused on deterministic systems; I am adapting it to systems with randomness).
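As a minimal illustration of the claim above that verification reduces to a semi-definite program, here is a global Lyapunov certificate for a linear system, sketched with cvxpy (the dynamics matrix is made up; the regional, nonlinear analyses described above use sums-of-squares programs from the same family):

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # toy stable dynamics dx/dt = A x
n = A.shape[0]
eps = 1e-6

# Search for V(x) = x' P x with V > 0 and dV/dt < 0 along trajectories.
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                  # V positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # V strictly decreasing
cp.Problem(cp.Minimize(0), constraints).solve()
print("Lyapunov certificate P =\n", P.value)
```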
I chose these problems because they seemed like the most interesting and important problems that were being studied by the research group I work with. Similar to paulfc, I am working on them partially due to inertia; I will probably not continue to work on them after I go to graduate school. However, I do believe that the problem of nonlinear control is extremely important. I just believe that there are other problems that are even more important.
I’m working on a suite of R functions for remote dispatch of R batch jobs with a function-like syntax. The code assumes that the remote and local machines share directories using NFS (or similar) and have the same path to the working directory. Right now the code is complete and bug-free (ha! haha! hahaha...eugh) -- I’m just trying to sort out a problem with NFS client-side file caching on my specific system.
I’m also working on contract, writing C code for a Bayesian clinical trial design company in Texas. I actually want to get involved in the statistics, not just the coding, but I’ll take what I can get. Hopefully it will be a foot in the door.
At work, I’ve been working on motion-detection algorithms for video games, which involves a fair amount of statistics and machine learning. So I’ve also been learning a lot more about statistics, data mining, dealing with noisy data (we have some data that’s reliably tagged, and a lot of data whose tags are likely to be wrong), comparing different methods, explaining these things to non-technical people, visualizing the data (distributions, scatterplots), etc. I’m still feeling like an idiot a lot of the time, but the system (most of it implemented in Python) works quite well. I just keep having the nagging doubt that there’s some algorithm out there that would give better results, some analysis method that would allow me to see more in the data, etc. But with deadlines looming, I can’t afford too much analysis paralysis either.
I’ve also been asking questions on Cross Validated, and browsing back through the answers to one of my old questions, I saw the best one was by a certain “John Salvatier”, and thought, “hmm, that name sounds familiar …”
On the side (I’m another one of those “program by day, and program some more by night” guys), I’m working on a little web-based game that’s not in a state worth showing right now, but is going along smoothly so far.
I’m about to start directing a community theatre production of Equus—auditions this coming Monday and Tuesday. I’ve been wanting to do this show for years.
I’m working on making music that doesn’t suck. Bleepy electropop, to be precise. Using LMMS, a cheap’n’cheerful open source clone of FruityLoops. I’ve been an avid record nerd for nearly thirty years now, it’s time I saw what I could do.
Current state: First demo. Something almost done. A minor diversion. I’m getting to know the capabilities and limitations of LMMS and Audacity (which is basically the four-track we all wanted twenty years ago, as a computer program). I’m accumulating a huge pile of fragments, which need putting together into actual pieces. I’m still looking for the sounds I’m after, and learning more about sound and how to make what I want, with a goal in mind. I’m learning how to do a mix on crappy speakers.
Hard parts: I have no musicianly ability whatsoever. My girlfriend (who is one of these people who can pick up pretty much any instrument and get something usable out of it) insists I learn a few chords on guitar. Also, for a writer, I’m finding it ridiculously difficult to come up with lyrics that I don’t instantly think are awful.
Goals: Make something I think is good and other people think is good.
Why: because I feel like it and it amuses me.
Value to humanity: utterly minimal. Value to me: keeps me amused and out of trouble.
I didn’t think much of the salmon or hammers tracks, but Opus I had an interesting sort of melody going on; if there were a full-length version of that (rather than 20 seconds), I think I’d enjoy it quite a bit.
(headdesk) Just wait until Opus II, which will be about a spider and a waterspout.
From the previous −1 of my comment and your headdesking, I sense I am missing something here. Am I supposed to not like Opus I but like the salmon or hammer tracks, or something like that?
It’s “Three Blind Mice”. (And I voted you up!)
/relistens
Oh, so it is. I’m such a musical dunce.
I bet no-one’s ever written a song for you before.
What, like I read everything I see? You think I’m made out of time or something?
The sad thing is, even being told in advance what the melody is, I still have difficulty recognizing it.
I has a theme song! It even sounds a bit sinister! Well. Obviously I have to link that on http://www.gwern.net—it’s not every joe schmoe that has his own theme song, you know.
Changing the beat, the scale and the chords probably doesn’t help.
(Great artists steal. Therefore, theft is great art. This is fallacious, but you get more songs that way.)
Gwern titles such pretty music. Can I have one too? Maybe “White Coral Bells”, or if that’s hard to find, “Row, Row, Row Your Boat”?
If that’s “pretty”, I expect you used “Sack Of Hammers” as a lullaby.
I feel like Sure I’ll Draw That on Reddit …
(unmixed mp3, will change for the better tomorrow when I can mix it on the crappy speakers. Requests taken, 10 karma for a slice of brilliance from my fragments pile with a public domain melody over it! Roll up! Roll up!)
Edit: Now mixed. Though there’s still artifact problems in the 128kbps mp3 that just aren’t present in the source WAV file—that’s certainly an interesting problem I’d hitherto not encountered. Linked version is 192kbps, no (bad enough) artifacts.
Thank you :D
The last comment on the linked post said what it was!
That’s it. The next one will be about a spider and a waterspout and will be called “Gwern”!
If you are wanting to learn simple guitar, the single best resource I know of is http://chordie.com . This is just a collection of chords and lyrics for songs, as you’ll find all over the net, but the most useful feature is the ability to transpose the songs into different keys automatically (it also has settings for non-guitar instruments which were invaluable for me in learning the banjo, mandolin and ukulele).
I’ve been working on being more sociable. I’ve been talking to people in my classes, and doing work in the lounge for my major. I’m not as productive as when I work on my own, but I’m getting involved in small talk. I think someone here mentioned that much of small talk is just about being enthusiastic and friendly—once I started looking at it this way it became much easier—it is actually pleasant for me now.
I’m about halfway through reading Social Cognition by Gordon Moskowitz, which is helping me gain a better understanding of cognitive biases.
I’ve also been writing explanations of science topics for my grandmother, among other things like “what is an internet community?”
I’m working on an iPhone game I started when learning Objective-C. It’s unknown (about 3000 free downloads/week) but loved (mostly rave App Store reviews), so I’m trying to make it go viral.
The version awaiting Apple review has incentives for introducing new players, and usage analytics to get a more accurate idea of what to fix and simplify. (Fan feedback is great, but they’re already converted & over the learning curve). I think I’ll still need to lower the learning curve/barrier to entry & boost the purple cow factor, but I’ll bet I see something in the analytics data I would never have considered.
Also, after years of staring at keyboards I’m finally learning to type, hopefully from tonight, straight to Dvorak, no labels. Any advice for doing this, or software recommendations for a Mac, would be most welcome.
There are a lot of typing flash games out there. It’s a fun way to combine relaxation/time-wasting and learning how to type.
How were your results with learning to type? Could you recommend any methods?
I was learning Dvorak on a mac so there weren’t a huge number of options, but I found a typing tutor called Master Key with a Dvorak option and that did the trick. You can get it from http://macinmind.com/
There are lots of general Dvorak resources out there, and more tutors for PCs if you have one. It has some quite dedicated followers.
Aside from coursework and teaching, blah blah blah, I’m working on my master’s thesis in statistics. I have data from a series of economic experiments that were originally analyzed using nonparametric methods that did not take the potential dependence structure in time into account—the data is several time series. I’m reanalyzing the data in a way that takes into account this dependence structure and including some model selection procedures. I’m considering several possible models, and at this stage I’m basically reading books and papers to learn about different potential models and Bayesian methods of analyzing those models, largely because I haven’t had a course in time series yet.
I’ve had a little success with two books collecting some of my blog posts so now I’m trying actively to turn my blog writing into longer-form work suitable for book publication. I’m currently working on three series of blog posts simultaneously, all of which I hope to turn into books.
The first, How We Know What We Know, is on a lot of the things people talk about on here, trying to explain Bayes’ theorem, Kolmogorov complexity, and the scientific method to a lay audience. I’m doing this because a lot of my blog readers, especially those who enjoyed my book “Sci-Ence! Justice Leak!” (which tackled the same things more obliquely, examining comics and Doctor Who to make complementary points), are interested in this material but don’t have any idea where to start.
The second, Cerebus reviewed part zero and part one, is a discussion of the 300-issue comic-book series Cerebus. I’m doing this because I consider Cerebus a great work of art, but one overshadowed by its creator’s serious mental illness.
And the third, and most advanced, is a track-by-track review of every legitimately available Beach Boys song (top five results on that page). I’m doing this because there’s no book available that covers the Beach Boys’ whole career from a musical, rather than biographical, point of view, odd as that may sound.
I’m also writing a novel, but that’s not online.
My goal with these generically is to try to earn enough money from the writing I would be doing anyway (these are all the kind of things I would be posting to my blog anyway, just organised better) to be able to quit my job and work primarily as a writer, with some time over for research, programming and music.
Forgot to add—am also working on improving my Perl skills by trying to solve all the Project Euler problems.
Presumably, people you didn’t already know buying them ;-) What level is “a little success” on Lulu?
The Beach Boys one strikes me as having serious breakthrough potential. Though I have no idea what the market for physical books on music is like these days (it was not bad in the ’80s and ’90s).
I’m making about fifty pounds a month from the two books’ combined sales on Lulu—most of that from sales of ebooks, actually (I’ve been hampered by Lulu having poor ePub processing software, so I can’t get the ePub of Sci-Ence! uploaded as yet, but am selling a surprising number of PDFs). I also got my books uploaded as Kindle books last month, and have made about fifty pounds so far from those (averaging one sale a day when I have them at $5, and three sales per day when I have them at $1).
So assuming sales stay more-or-less level that means I can average £50 per month per book without any kind of promotion other than my blog. However, I’m hoping that by increasing the number of books I have available (and by having them in niche markets—both my books have topped the Kindle charts for their respective categories, despite low sales) I’ll get some kind of name recognition. It only needs one breakout success and I can make a significant amount of money. (There are people selling hundreds of thousands of self-published books a month, but they’re primarily writing pseudo-Twilight ‘dark fantasy’, and I have too much sense of shame to do that ;) ).
I hope the Beach Boys one might be successful, especially since I have some name recognition within the BB-fan community (I was very active in online fandom in the late 90s and early 2000s).
(I slightly miswrote earlier, BTW—there is one career-retrospective look at the Beach Boys’ music. Mine is significantly more in-depth.)
I’m working on (1) computer vision for augmented reality [my PhD], (2) machine learning for theorem provers, (3) building the Oxford Transhumanists, and (4) an iPhone app called Relaxing Stories.
1) I build models of the wearer’s environment that include meaningful labels like “floor”, “wall”, “table top”. This will hopefully help us leverage 3D models for figuring out where objects, people, events, and interactions are happening and where they are likely to happen in the future.
2) Should theorem provers use training data? I think so. Theorem proving is undecidable in general; humans are good at it because we learn to recognise patterns. I’m transforming the ATP problem from an abstract search over proofs into a black box that searches greedily, asking at each point which direction to search next. The problem is then to learn which patterns indicate a fruitful search direction by examining a large corpus of previous proofs. (A toy sketch of this framing follows after this list.)
3) http://www.facebook.com/group.php?gid=265828309853
4) http://relaxingstories.com/
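Here is the search framing of (2) in miniature: a toy Python rendition (my sketch, not the actual system) of greedy best-first proof search with a pluggable heuristic. The learning problem is then fitting `score` from a corpus of previous proofs:

```python
import heapq
import itertools

def greedy_search(start, expand, is_goal, score, budget=10000):
    """expand(state) -> successor states; score(state) -> learned estimate
    of how fruitful this direction is (lower = more promising).
    States are assumed hashable."""
    tie = itertools.count()   # break heap ties without comparing states
    frontier = [(score(start), next(tie), start)]
    seen = {start}
    for _ in range(budget):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), next(tie), nxt))
    return None               # out of budget
```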
I am working on a new approach to epistemology—essentially an extension of Frege’s context principle—that explains the nature of meaning and existence. I am finding that it provides traction on many problems in epistemology, linguistics, rationality and philosophy in general. I also believe that it provides traction on problems in more practical fields like physics and computer science.
I think that my approach to epistemology is pretty cool, that it is essentially a “Theory of Everything”; however epistemology is not my field of training or experience—and it seems that nothing is really “new”—so I am researching the literature. And yes I suppose that you can say there is related literature, in that essentially all literature in some sense explores meaning. I am reading related material in Buddhism, philosophy, ethics, epistemology, linguistics, semiotics, logic, theory of computation, mathematics, and physics. Much of this material is steeped in domain specific language; to even understand the abstract of many papers I need a deep background in the topic. My only real hope is to thread my way through this material looking for key concepts—restricting my detailed analysis to sources that effectively add to my network of knowledge.
The research is paying off. My original topic of research was loosely “meaning is context dependent”. Certainly this idea is well represented in the literature; there are forms of contextualism in ethics, logic, mathematics, epistemology, and linguistics. There are related philosophies like relativism, pluralism, perspectivism and nihilism. Although contextualism in these fields is clearly a contender for explaining the nature of truth and meaning, there are also valid criticisms of it. It is clear that the traditional approaches to contextualism are flawed; and yet I still believe that the core idea is valid.
I still have much more work to do, but I believe that I have found an approach to contextualism that fixes its traditional failings; but since this approach is potentially (relatively) novel I now need to worry about my intellectual property agreement with my employer. This is an unfortunate barrier to my progress. To verify and extend my ideas I need to expose them to public evaluation and criticism—but my company has a broad definition of intellectual property and insists that I go through their invention disclosure process—seeking permission to publish from them; permission that they will only grant if they are convinced it is in their interest.
This sounds very interesting, both philosophically and from a computer science perspective. If there is any more material I could read, please let me know, publicly or privately.
Related to computer science:
Read Martin Ward’s Language Oriented Programming (1995). You will be interested in the idea of having domain experts design languages instead of programmers. The key take-away from this paper is the idea of using domain-specific languages to achieve the separation of concerns.
Charles Simonyi’s intentional programming contains a key related idea—stop thinking about code as text, think about it as an interface.
Aspect-oriented programming contains the key idea of cross-cutting concerns.
If you can imagine a world full of domain-specific languages designed by domain experts, where the languages are not text and where programming is all about the process of integrating cross-cutting concerns, then you will start to see the future of software as I do.
Related to philosophy and epistemology:
I am a neophyte in this field, so any claim I make may be wrong. I am also still just scratching the surface with my research—every day I find new threads that I need to follow to build my network of knowledge—as a result I don’t have a nice concise list of references to provide you.
My starting thesis was that context creates meaning and in its absence there is no meaning; domains can only be connected if they have contexts in common. Common contexts provide shared meaning and open a path for communication between disparate domains. I have been calling this the context principle.
There is plenty of prior art in the literature for this position, generally known as or associated with contextualism—although it appears under other guises as well. I have a new formulation of this thesis which I think adequately addresses the traditional criticisms of contextualism; but as I mentioned, for now I feel constrained to reserve it.
For the purpose of this response I think I can stick to my original thesis, and to other material found abundantly within the literature.
Gottlob Frege originally coined the term context principle in his Foundations of Arithmetic, 1884 (translated). He stated it as “We must never try to define the meaning of a word in isolation, but only as it is used in the context of a proposition.” Although I believe that my definition for this term gets at the essence of Frege’s usage, I don’t claim that it matches his intent.
Frege’s definition has two problems; it attributes meaning to usage and it posits that a proposition is a context that can provide that meaning. These perspectives are prevalent even in contemporary literature and have resulted in weak forms of contextualism.
Attributing meaning to usage might imply that it is the intent of the speaker that sets the meaning of a message. It is as if meaning floats along with the message, to be absorbed by the receptive listener. In fact, the meaning of the message can only be found within the speaker as he creates (encodes) it, and within the listener as he decodes it. The meaning that the speaker attempted to encode into the message may be very different from the meaning that the listener obtained from decoding it; each action is performed exclusively from the context of their unique perspectives.
In my version of the context principle, the context that creates meaning is always a “mind” in some sense. It isn’t the proposition that sets a context for meaning, but a person’s understanding of that proposition when they decode the related message.
This idea is expressed by Charles Sanders Peirce (as found here):
...
...
He believed that in some sense these quasi-minds permeate our reality, and that all meaning is formed through a process of inference.
In this discussion I am having with the user TheOtherDave I refer to this as a “chain of inference”.
My claim is that the meaning we attribute to something depends entirely on this chain of inference and does not exist independently of it. We don’t evaluate reality directly, we evaluate our observations of reality; this is a process with layers of interpretation.
For example, we don’t observe a squirrel directly; we observe photons that have interacted with the squirrel. Actually our “observation” of a squirrel is the result of a chain of inference performed by our brain in response to the sensory input—based on our prior experience.
Squirrels do not exist outside of our mind. I am not arguing for solipsism, I am arguing that the human mind is the only context that the squirrel abstraction has a meaning within. The stuff that makes up a squirrel certainly has existence within our common context of physical reality; but I’m sure that physical reality holds no meaning for that stuff in a way that is equivalent to our “squirrel” abstraction.
From this perspective constructivist epistemology is more valid than scientific realism—although I would not strictly adopt one perspective over the other. Mathematics, atoms and quantum mechanics are abstractions that exist from certain perspectives, but it would be false to claim that they exist in any absolute sense.
I am working on an open translation memory web service. Instead of writing my thesis.
Our system takes a document and its translation, and it gives back a set of sentences, each together with its translation. Users can upload such document pairs, and other users can then search the resulting corpus for words and phrases in either of the two languages.
The core of the system is my hunalign sentence-aligner algorithm. But the core is not that important, actually. The trickiest parts of the system are all trivial from an algorithmic perspective (document processing, interfacing with third-party libraries, etc.).
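For the curious, the classic length-based dynamic program at the heart of this family of aligners fits in a page. This is a toy Python sketch, not hunalign itself (which also mixes in dictionary cues and handles 2-1 and 1-2 merges):

```python
def align(src_lens, tgt_lens, gap=3.0):
    """src_lens, tgt_lens: sentence lengths of the two documents.
    Returns a list of (i, j) pairs of aligned sentence indices."""
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:   # pair sentence i with sentence j ...
                c = cost[i][j] + (abs(src_lens[i] - tgt_lens[j])
                                  / max(src_lens[i], tgt_lens[j]))
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j)
            if i < n and cost[i][j] + gap < cost[i + 1][j]:   # ... or skip one
                cost[i + 1][j], back[i + 1][j] = cost[i][j] + gap, (i, j)
            if j < m and cost[i][j] + gap < cost[i][j + 1]:
                cost[i][j + 1], back[i][j + 1] = cost[i][j] + gap, (i, j)
    pairs, ij = [], (n, m)
    while ij != (0, 0):           # trace the cheapest path back to the start
        pi, pj = back[ij[0]][ij[1]]
        if ij == (pi + 1, pj + 1):
            pairs.append((pi, pj))
        ij = (pi, pj)
    return pairs[::-1]
```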
Why am I doing this? Because I feel that this is a useful service. I have some ideas about how to monetize it, but I will not be disappointed if these don’t work out. Also, this work is more fun than writing a thesis. (My thesis is already at the phase where no new algorithms and experiments are needed, it really is just writing, but a huge amount of it.)
I’m working on developing a logically consistent and reality-based method of calculating pension liabilities. Currently in the US, pension liabilities are computed differently for public pension plans funded by taxpayers vs. private sector pension plans funded by corporations. I think neither current method makes sense.
Financial economists argue that pension liabilities should be discounted at risk-free rates based on the probability of their being paid, using a model that treats accrued pension liabilities like a security. I disagree, and I’m conducting an experiment to see whether a complete proof for the financial economics position exists—with no gaps. All the relevant articles I am aware of are basically hand-waving.
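(To spell out the position being contested, with made-up numbers: it values the accrued benefit stream the way one would value a default-risky bond.)

```python
# Hypothetical numbers, purely to show the calculation being disputed:
payments = [100.0] * 30   # promised annual benefit payments
p_paid = 0.98             # assumed per-year probability the promise is honored
r = 0.03                  # risk-free discount rate
pv = sum(c * p_paid**t / (1 + r)**t
         for t, c in enumerate(payments, start=1))
```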
Coming out in opposition to the current financial economics argument is a necessary prerequisite to putting forth my model, which I claim is also based on financial economics.
My current project is Landwelded.
It is a reader-driven webcomic thing inspired by the MS Paint Adventures franchise by Andrew Hussie, in particular his latest installment, Homestuck.
It’s about six young Bayesians in a semi-weirdtopia, who may or may not at some point play a videogame together.
I’m working on
1) Synthesising biologically active and structurally challenging molecules for my PhD. See link.
2) Relaxing Stories iPhone app: the app relaxes the user in under five minutes with visually enhanced stories read by soothing voices.
3) Science communication: writing about the latest advances in chemistry for a wider audience. Also, reaching out to other graduate students and institutes to get them involved in communicating science.