Request for Intelligence Philosophy Essay Topic Suggestions
As part of a philosophy course I’m currently taking called Intelligence in Machines, Humans, and Other Animals, I have to write a <3000w essay on a topic related to intelligence. The description is here, but I’ve copied the important details below. I figured I might as well solicit suggestions for things to research. Realistically, I am likely to optimize the essay more for passing the course than for rigour though, so if you’re expecting a very thorough review of something then you may be disappointed. But I suspect that it will still be at least an interesting jumping-off point.
Essay Topics: pick one from A, B, or C
A. Compare intelligence in machines, humans, and other animals with respect to one of the following topics. Feel free to narrow the topic down to some more specific issue, and to consider specific machines, animals, and human capacities.
You must pick a completely different topic from your first essay—I’ve kept track. For example, if you wrote on one kind of imagery, you can’t write on another kind of imagery.
- Perception
- Imagery
- Problem solving
- Learning (I did this for my first essay)
- Analogy
- Emotion
- Consciousness
- Action
- Language
- Creativity
- The self
How to narrow down the topic
After choosing one of the 11 topics, you can narrow it down to particular aspects and entities (human, computer, animal).
For example, you could narrow perception down to sound, the computer down to SIRI, and the animal down to dogs.
Imagery could be narrowed down to visual, auditory, etc.
Learning could be narrowed down to supervised or unsupervised, or to teaching.
Analogy could be narrowed down to intelligence test type analogies (A is to B as C is to what?).
Emotion could be narrowed down to empathy.
Etc.
Edited to add: Note that these are pretty squirrellable. E.g. last time I took “Learning” and used it to talk about (recursive) self-improvement in machines and humans (planning to post this at some point). So feel free to propose something even if you only have a vague notion of how it would fit into one of the categories.
One constraint: I need to be able to ask some sort of question and then produce evidence towards either side of it, i.e. it can’t just be a review of the topic. But this too can be pretty vague; in my last essay I did “are humans or machines better suited for self-improvement?”, concluding “humans for now, ultimately machines”.
Whoever downvoted this, I would appreciate knowing why. I don’t imagine you’re going to stick around long enough to do so, but figured I’d request anyway.
It reads too much like “Help me with my homework”, and you don’t give the impression that you care about the topic or have given it any thought. Therefore it seems to offer little in terms of productive discussion.
This may be wrong but it’s the impression I got from reading the post.
That’s interesting, because to me it read more like “I’m going to write something interesting about anything you like, do some research for you, and even share the results” and “as long as I have to do this assignment I might as well make it useful to someone” but maybe that’s because I recognized the poster’s name, read his blog, etc.
I can see how someone might interpret it this way, though.
Yeah, I was definitely thinking of it more in this light: “I’m going to go do 10-15 hours of research and writing, and I’m offering anyone the chance to influence the topic with < 5 minutes of their own effort.”
Thanks for clarifying—rescinded my downvote after your edit.
My own interests in these general areas relate to the intersection between language and action in psychology, where (a) embodied cognition is all the rage, and (b) theories of action (motor planning more specifically) are being applied to language theory (work by Pickering & Garrod especially). This is especially interesting given the application of similar approaches in robotics (Cangelosi and embodiment particularly come to mind here).
Me too Malcolm.
I didn’t downvote, but this seems better suited to an open thread post than a toplevel Discussion post.
I recently re-read Gwern’s Drug Heuristics, and this jumped out at me:
If I had more time, I’d try to look more into the intelligence tests that are given to animals. If animals are smarter (in some sense of the word), then why are humans dominant? I think the answer to this might be something like “Humans evolutionarily stumbled upon language, then encoded this in our genes, and language allows us to reason about the world, which is something raw animal intelligence/pattern-matching cannot do.”
I think it’s an interesting hypothesis, but I don’t know where I’d start trying to evaluate it, or how likely it is to be true.
Huh! Yeah, that’s super interesting, but it seems like it might be hard to actually tackle. Finding info on animal intelligence and supporting specific reasons for human dominance both seem a little messy. I’ll put it on the list of options though :)
Another question about animal consciousness I find interesting is whether or not animals can recognize cartoons. Cartoons are abstractions/analogies of the real world. I’m curious if this abstract visual pattern recognition is possessed by animals, or if it requires human-level abstract pattern recognition. There are also some computer vision papers about classifying cartoons and using artificially generated data-sets (since you mentioned it had to involve humans, animals, and robots).
Not sure if this will be of any use to you but… figured I’d let you decide that!
Walking upright allowed our ancestors to carry more resources greater distances. But not all our ancestors were equally “smart” at figuring out what to carry. It seems pretty straightforward that being better at calculating the best (most valuable) bundles conferred greater fitness. A group that could figure out the best combination of food and weapons to carry would have a greater chance of survival compared to any group that carried too much of one at the expense of the other. Voila! Here we are… Homo sapiens.
Right now there’s some concern regarding the rise of super intelligent robots. In order to beat us, robots have to be better than we are at determining which bundles to carry. If a robot leaves home but forgets to take a spare battery with it… then it’s probably not going to “win”.
The process by which humans have come to be better at valuation seems pretty clear to me. Individuals that weren’t that great at valuations were removed from the gene pool. But with robots… it’s not that clear to me. How do robots become better than we are at valuation? What does that process entail? If I want to upgrade my computer then I have to buy the necessary components. How does a moderately smart robot buy the necessary components it needs to upgrade itself? Are we guessing that it has a job? Are we guessing that it robs a bank? Are we guessing that we’ll hire robots to protect banks from thieving robots?
And, if robots do become better than we are at valuation… then should we really be concerned that they will waste us? Right now we waste each other. If robots waste us too then they are no better at valuation than we are. They’ll fit right in with us. Some will end up in jail and the worst ones will end up in congress.
If robots don’t waste us, then they are better at valuation than we are.
So it’s hard for me to imagine a realistic scenario where robots 1. are better at valuation than we are and 2. waste us.
If you’re interested in “my” theory of human intelligence… What Do Coywolves, Mr. Nobody, Plants And Fungi All Have In Common?