Teaching myself C, because I will need it for my day job (I’m a software engineer, but mostly work in Perl right now) and because it will be useful for many things in the future. People in software who don’t know C are like people in business who don’t know English.
Writing a book on the Kinks.
Writing enough short stories to get a complete book of them out, and in the process trying to get them published by SF zines. Both of these serve a long-term aim of making enough money from writing that I can devote more of my time to doing interesting stuff instead of working a job.
Assisting with a political campaign in the local elections. The aim is to slowly build a saner society—I think we need fundamental changes to the political institutions in the UK, and support for the party in question will, in the long term, increase the probability of those changes happening.
Teaching myself decision theory. I am on record as saying that I think Eliezer’s/the SIAI’s analysis of AI risks is wrong, but I think there is a non-zero probability that it is right. If it does turn out to be right, the most useful thing I could do would be to contribute to some of the open decision theory problems—and that would be useful even in the much more likely case that their analysis is mistaken.