True heroism is minutes, hours, weeks, year upon year of the quiet, precise, judicious exercise of probity and care—with no one there to see or cheer.
— David Foster Wallace, The Pale King
Yeah, more people donated to an animal shelter than to an organization working on existential risk. Makes me feel all warm and fuzzy inside. No, wait, the opposite of that.
Sorry, I could not make sense of any of this. Especially the symbolic part, but also the conversation part. And all the other parts too.
Note that this is not just my vision of how to get published in journals. It’s my vision of how to do philosophy.
Your vision of how to do philosophy suspiciously conforms to how philosophy has traditionally been done, i.e. in journals. Have you read Michael Nielsen’s Doing Science Online? It’s written specifically about science, but I see no reason why it couldn’t be applied to any kind of scholarly communication. He makes a good argument for incorporating blog posts into scientific communication, which, at present, doesn’t seem compatible with writing journal articles (is it kosher to cite blog posts?):
Many of the best blog posts contain material that could not easily be published in a conventional way: small, striking insights, or perhaps general thoughts on approach to a problem. These are the kinds of ideas that may be too small or incomplete to be published, but which often contain the seed of later progress.
You can think of blogs as a way of scaling up scientific conversation, so that conversations can become widely distributed in both time and space. Instead of just a few people listening as Terry Tao muses aloud in the hall or the seminar room about the Navier-Stokes equations, why not have a few thousand talented people listen in? Why not enable the most insightful to contribute their insights back?
I would much rather see SIAI form an open-access online journal or scholarly FAI/existential risks wiki or blog for the purposes of disseminating writings/thoughts on these topics. This likely would not reach as many philosophers as publishing in philosophy journals, but would almost certainly reach far more interested outsiders. Plus, philosophers have access to the internet, right?
Can’t imagine the other commenters learned programming by jumping into Scheme or Haskell, or reading SICP, or whatever it is they’re recommending :-)
Agreeing with this. I love CS theory, and I love SICP, but I learned to program by basically ignoring all that and hacking together stuff that I wanted to make. If you want to learn to program, you should probably make things first.
I think you meant to link here: http://blogs.discovermagazine.com/gnxp/2011/03/your-genes-your-rights-fdas-jeffrey-shuren-not-a-fan/
I remember reading the argument in one of the sequence articles, but I’m not sure which one. The essential idea is that any such rules just become another problem for the AI to solve, so relying on a superintelligent, recursively self-improving machine being unable to solve a problem is not a very good idea (unless the failsafe mechanism were provably impossible to circumvent, I suppose. But here we’re pitting human intelligence against superintelligence, and I, for one, wouldn’t bet on the humans). The more robust approach seems to be to make the AI motivated not to do whatever the failsafe was designed to prevent in the first place, i.e. Friendliness.
Here are the reasons to be skeptical that I picked up from that blog post:
The website of the Journal of Cosmology is ugly.
The figures in the paper are “annoying.”
Perhaps the claimed bacteria aren’t bacteria at all, but just squiggles.
The photos of the found bacteria aren’t at the same magnification as the photos of real bacteria.
The bacteria seem too well-preserved to have traveled through the solar system for such a long time.
Haha, maybe next they’ll find bigfoot footprints on a meteor.
I think the point is that if you’re trying to convince someone to pay you to write code for them and you have no prior experience with professional programming, a solid way to convince them that you’re hireable is contributing significant amounts of code to an open source project. This demonstrates that 1) you know how to write code, 2) you can work with others, and 3) you’re comfortable working with a complicated codebase (depending on the project).
I’m not certain that it’s the most effective way to achieve this objective, but I can’t think of a better alternative. Suggestions are welcome.
Math is not necessary for many kinds of programming. Yeah, some algorithms make occasional use of graph theory, and there certainly are areas of programming that are math-heavy (3d graphics, perhaps? Also, stuff like Google’s PageRank algorithm uses linear algebra), but there are huge swaths of software development for which no (or little) math is needed. In fact, just to hammer on this point, I distinctly remember sitting in a senior-level math course and overhearing some math majors discuss how they once took an introductory programming course and found the experience confusing and unenjoyable. So yes, math and programming are quite distinct.
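To give a concrete sense of the linear algebra involved, here’s a toy power-iteration version of PageRank. This is a sketch of the textbook algorithm, not Google’s actual code; the damping value and the example link graph are made up for illustration:

    import numpy as np

    def pagerank(adjacency, damping=0.85, iterations=50):
        """adjacency[i][j] = 1 if page i links to page j."""
        A = np.asarray(adjacency, dtype=float)
        n = A.shape[0]
        out_degree = A.sum(axis=1)
        out_degree[out_degree == 0] = 1.0  # avoid division by zero for dead-end pages
        M = (A / out_degree[:, None]).T    # column-stochastic link matrix
        rank = np.full(n, 1.0 / n)         # start with uniform rank
        for _ in range(iterations):
            rank = (1 - damping) / n + damping * M @ rank
        return rank

    print(pagerank([[0, 1, 1],   # page 0 links to pages 1 and 2
                    [0, 0, 1],   # page 1 links to page 2
                    [1, 0, 0]])) # page 2 links back to page 0

The whole thing is repeated matrix-vector multiplication, which is exactly the kind of math most day-to-day programming never touches.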
The probability I would place on you being able to make a living doing programming depends on only one factor: your willingness to spend your free time writing code. There are plenty of people with CS degrees who don’t know how to program (and, amazingly, don’t even know how to FizzBuzz), and it’s almost certainly because they’ve never spent significant amounts of time actually building software. Programming is “how-to” knowledge, so if you can find a project that motivates you enough to gain significant experience, you should be set.
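For anyone who hasn’t seen the reference: FizzBuzz is about the simplest screening problem imaginable. One sketch of many possible solutions:

    # FizzBuzz: print 1..100, but "Fizz" for multiples of 3,
    # "Buzz" for multiples of 5, and "FizzBuzz" for both.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)

If a candidate can’t produce something like this, no amount of credentials makes up for it.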
The common advice I’ve seen is to spend a few months contributing to some open source project. See this blog post, for example. (The advice in that post is hard to follow unless you already know C++ and feel like banging your head against the enormously complex Google Chrome codebase).
I’m also trying to get a programming job, but my hangup so far has been finding an open source project that I find interesting enough to contribute code to.
Offtopic, but in all seriousness if some kind of rave church existed I would attend regularly. Although I’m not certain that it’d be all that different from just going to an actual rave.
The most interesting part to me was the machine learning with groups of mutalisks to avoid AoE attacks. Watching the videos, I’m pretty sure that kind of micro could defeat any human player (I’m not sure that even Boxer could deal with that).
The thing that mystifies me with this kind of research is why people still call it “Artificial Intelligence.” Taboo those words and what are you left with, really? Essentially, using machine learning techniques and heuristics based on extensive domain knowledge to solve complicated software engineering problems. It’s impressive, but it’s also not clear that this has much of anything to do with making machines that can “use language [and] form abstractions and concepts”. Then again, this is probably a good thing given the mainstream’s apparent unwillingness to consider problems of friendliness.
Just seconding this. Still more spam today, so CAPTCHA is ineffective.
You’re right that Terminator’s depiction of AI is awful, but HAL doesn’t seem that bad at all, at least as far as mainstream depictions go.
I didn’t downvote, but it may be because the title isn’t actually true. A commenter on Hacker News posted this:
Hi Guys, I am from China, and of course the Bayesian is taught in China. Only in the culture revolution time, this crazy thing may happen; but it’s long time ago when Mao was alive. Now China is a modern society; of course, the Bayesian theory is taught in schools; otherwise how could they send the spaceship to the outer space and have the fastest super computer in the world so far.
Is there anything after death?
Unless you have good reason to believe that the brain does not completely implement consciousness (or “subjective experience” or whatever you want to call it), the notion that “you” can go on experiencing things after your brain stops working isn’t even coherent. You are your brain (or at least the information contained in it).
At least, that’s my understanding. I’m no domain expert here.
I can’t believe Hofstadter (or anyone, really) is arguing that Bem’s paper should not have been published. The paper was, presumably, published because peer reviewers couldn’t find substantial flaws in its methodology. That says more about the state of standard statistical practices in psychology, or at least about the peer-review practices at the journal in which Bem’s paper was published, and that’s useful information in either case.
I don’t think that it’s possible to convince most people to be more rational. Trying usually causes more problems than it’s worth. As a result, I often have to tell white lies or pretend to believe in things I don’t believe in, even though doing so kills me inside.
This is a problem, and unfortunately my only solution is to cease (or at least minimize) the contact I have with such people. I say “unfortunately” because 100% of the people I know IRL are uninterested in rationality, so the net result is generally that I spend all of my free time on the internet.
Pungent is a web-based note-taking app that I’m working on. I made this because I had a need for something to organize personal notes, but nothing I found was satisfactory. Right now it’s essentially a less-featured clone of Workflowy, but I plan to develop it further once I figure out what direction to go in. Development is on hold for the moment while I spend some time using it and figuring out what I want it to do.
I’m also working on a research project to try to understand how human cognition works. I think FAI is really interesting and important, but I’m baffled by the decision theory approach that seems to be popular around here. Not that I have strong reasons to believe that this line of inquiry should not be pursued, but every time I think about intelligent entities purely in terms of decision theory (i.e. as entities with a “utility function” that assigns values to “states of the world,” which then take whatever actions maximize said utility), I notice that I am confused.
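To make my confusion concrete, the picture I have in mind is roughly the following caricature of an expected-utility maximizer. This is my own toy rendering, not anyone’s actual formalism; the action names, outcome probabilities, and “apples” utility are all invented for illustration:

    # A toy expected-utility maximizer -- a caricature of the
    # decision-theoretic picture, not any published formalism.

    def choose_action(actions, outcome_model, utility):
        """Pick the action with the highest expected utility.

        outcome_model(action) -> list of (probability, world_state) pairs
        utility(world_state)  -> a real number
        """
        def expected_utility(action):
            return sum(p * utility(state) for p, state in outcome_model(action))
        return max(actions, key=expected_utility)

    # Hypothetical agent that values apples.
    outcomes = {
        "buy":   [(0.9, {"apples": 1}), (0.1, {"apples": 0})],
        "steal": [(0.5, {"apples": 1}), (0.5, {"apples": 0})],
    }
    best = choose_action(["buy", "steal"],
                         lambda a: outcomes[a],
                         lambda s: s["apples"])
    print(best)  # -> "buy"

The formalism is clean, but it’s precisely the gap between this picture and anything the brain actually does that I keep tripping over.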
So I’m wading through neuroscience papers at the moment. Spatial cognition and memory are well-studied phenomena that likely serve as a foundation for other cognitive abilities, so they seem like as good a place to start as any. I don’t have a website up yet for my findings, but here’s what I’ve been looking at to start:
Place cells, grid cells, and the brain’s spatial representation system—decent, recent review article for spatial cognition.
Tracking the Emergence of Conceptual Knowledge during Human Decision Making is a recent paper with some findings that seem relevant to understanding “concepts”.
The Medial Temporal Lobe is a good review of structures in the MTL, which includes structures important for spatial cognition and memory.
I’ve also been looking at Jeff Hawkins’ work on Hierarchical Temporal Memory because it is not simply another neural network model, but is actually proposed to be a model of the neocortex. Even if his views on intelligence are wrong or highly incomplete, his methodology seems sound: his work is biologically-grounded, but he doesn’t get caught up in unnecessary details.
My goal for this project is to become less confused about Friendly AI. I’d like to set up a webpage to record my progress on this project, so I’ll likely edit this post when I have a link for that.