In my experience, academics often cannot distinguish between SIAI and Kurzweil-related ventures such as Singularity University. With its $25,000 tuition for a two-month program, SU is viewed as something of a scam, and Kurzweilian ideas of exponential change are seen as naive. People hear about Kurzweil, SU, the Singularity Summit, and the Singularity Institute, and assume that the latter is behind all those crazy singularity things.
We need to make it easier to distinguish the preference and decision theory research program, an attempt to solve a hard problem, from the larger cluster of singularity ideas, which are not essential to it even in the intelligence explosion variety.
Agreed. I’m often somewhat embarrassed to mention SIAI’s full name, or the Singularity Summit, because of the term “singularity”, which in many people’s minds (to some extent including my own) is a red flag for “crazy”.
Honestly, even the “Artificial Intelligence” part of the name can misrepresent what SIAI is about. I would describe the organization as just “a philosophy institute researching hugely important fundamental questions.”
Agreed on the “singularity” point; I’ve had similar thoughts. Given recent popular coverage of the various things called “the Singularity”, I think we need to accept that it’s pretty much going to become a connotational dumping ground for every cool-sounding futuristic prediction anyone can think of, centered primarily on Kurzweil’s predictions.
I disagree somewhat with that description. SIAI’s ultimate goal is still to create a Friendly AI, and all of its other activities (general existential risk reduction and forecasting, Less Wrong, the Singularity Summit, etc.) are, at least in principle, carried out in service of that goal. Its day-to-day activities may not look like what people imagine when they think of an AI research institute, but that’s because FAI is a very difficult problem with many prerequisites that have to be solved first. I think it’s fair to describe SIAI as still being fundamentally about FAI (at least to anyone who’s adequately prepared to think about FAI).
Describing it as “a philosophy institute researching hugely important fundamental questions” may give people the wrong impression if it’s not quickly followed by a more specific explanation. When people hear “philosophy” plus “hugely important fundamental questions”, their minds will probably leap to questions that are 1) easily solved by rationalists, and/or 2) actually fairly silly and not hugely important at all. (“Philosophy” is another term I’m inclined to avoid these days.)

When I’ve had to describe SIAI in one phrase to people who have never heard of it, I’ve been calling it an “artificial intelligence think-tank”. Meanwhile, Michael Vassar’s Twitter describes SIAI as a “decision theory think-tank”. That’s probably a good description if you want to address the current focus of their research; it may be especially apt in academic contexts, where “decision theory” already refers to an interesting established field that’s relevant to AI but doesn’t share the connotations of “artificial intelligence”: missed goals, science fiction geekery, anthropomorphism, and so on.
Ah, I think I can guess who you are. You work under a professor called Josh and have an umlaut in your surname. Shame that the others in that great research group don’t take you seriously.