Wikipedia:

In February 2013, IBM announced that Watson software system’s first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center in conjunction with health insurance company WellPoint.[13] IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.[14]
How do you know, when you work at a project like Watson, whether the work you are doing is dangerous and could result in producing a UFAI? Didn’t they essentially build an oracle AGI?
What heuristic should someone building a new AI use to decide whether it’s essential to talk with MIRI about it?
Didn’t they essentially build an oracle AGI?

No, they very much didn’t.
What heuristic should someone building a new AI use to decide whether it’s essential to talk with MIRI about it?
Why would they talk to MIRI about it at all?
They’re the ones with the actual AI expertise, having built the damn thing in the first place, and have the most to lose from any collaboration (the source code of a commercial or military grade AI is a very valuable secret). Furthermore, it’s far from clear that there is any consensus in the AI community about the likelihood of a technological singularity (especially the subset which FOOMs belong to) and associated risks. From their perspective, there’s no reason to pay MIRI any attention at all, much less bring them in as consultants.
If you think that MIRI ought to be involved in those decisions, maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn’t already accept any of the site dogmas or hold EY in any particular regard.
If you think that MIRI ought to be involved in those decisions
As far as I understand, it’s MIRI’s position that they ought to be involved when dangerous things might happen.
maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn’t already accept any of the site dogmas or hold EY in any particular regard.
But what goes for someone who does accept the site dogmas in principle but still does some work in AI?
I’m sorry, I didn’t get much sleep last night, but I can’t parse this sentence at all. Could you rephrase it for me?

Well, step one is ever having heard of MIRI, or having thought about UFAI in any context except that of HAL or Skynet.
I doubt that’s enough. If someone still wants to do AI research after having heard of UFAI, he needs some decision criteria to decide when it’s time to contact MIRI.
The decision criteria are easy: talk/listen to the recognized AI research experts with a proven track record. Then weigh their arguments, as well as those of MIRI. It’s the weight assignment that’s not obvious.
If you have a potentially dangerous idea, then talking to recognized AI research experts might itself be dangerous.
No, not really. If the situation is anything like that in math, physics, chemistry or computer science, unless you put your 10k hours into it, your odds of coming up with a new idea are remote.
I don’t believe that to be true, as ideas can sometimes come from integrating knowledge of different fields.
An anthropologist who learned a new paradigm about human reasoning from studying the way some African tribe reasons about the world can reasonably bring a new idea into computer science. He will need some knowledge of computer science, but no 10k hours.
In http://meaningness.com/metablog/how-to-think David Chapman describes how he solved AI problems by using various tools.
One problem turned out not to be that difficult if you had knowledge of a certain field of logic. He solved another problem through anthropology. According to him, advances are often a function of having access to a particular mental tool to which no one else who tackled the problem had access.
Putting in a lot of time means that you have access to a lot of tools and know of many problems. But if you put all your time into learning the same tools that people in the field already use, you probably don’t have many mental tools that few people in a given field possess.
Paradigm-changing inventions often come into fields through people who are insider/outsiders. They are enough of an insider to understand the problem, but they bring expertise from another field. See “The Economy of Cities” by Jane Jacobs for more on that point.
I concede that a math expert can start usefully contributing to a math-heavy area fairly quickly. Having expertise in an unrelated area can also be useful, as a supplement, not as a substitute. I do not recall a single amateur having contributed to math or physics in the last century or so.
Do you consider the invention of the Chomsky hierarchy to lie outside the field of math? Do you think that Chomsky had 10k hours of math expertise when he wrote it down?
Regardless, having less than 10k hours in a field and being an amateur are two different things.
I don’t hold economists in very high regard, but I would expect that one of them did contribute at least a little bit to physics.
I remember chatting with a friend who studies math and computer science. My background is bioinformatics.
If my memory is right, he was working on a project that an applied mathematics group gave him because he knew something about mathematical technique XY.
He needed to find some constants that were useful for another algorithm.
He had a way to evaluate the utility of a certain value as a constant. His problem was that he had a 10-dimensional search space and didn’t really know how to search effectively in it.
In my bioinformatics classes I learned algorithms that you can use for a task like that. I’m no math expert, but in that particular problem I could still provide useful input.
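For concreteness, here is a minimal sketch of the kind of algorithm I mean, assuming something like simulated annealing, a standard heuristic for searching high-dimensional spaces that gets taught in bioinformatics courses. The `utility` function below is a hypothetical stand-in for his actual evaluation procedure, and all parameter values are illustrative:

```python
import math
import random

def utility(x):
    # Hypothetical stand-in for his evaluation function:
    # scores a candidate vector of constants; higher is better.
    return -sum((xi - 0.5) ** 2 for xi in x)

def simulated_annealing(dim=10, steps=20000, t_start=1.0, t_end=1e-3):
    # Start from a random point in the unit hypercube.
    current = [random.random() for _ in range(dim)]
    best, best_u = current[:], utility(current)
    for step in range(steps):
        # Cool the temperature exponentially from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        # Propose a small Gaussian perturbation of one coordinate.
        candidate = current[:]
        candidate[random.randrange(dim)] += random.gauss(0.0, 0.1)
        delta = utility(candidate) - utility(current)
        # Always accept improvements; accept worsenings with a
        # probability that shrinks as the temperature drops.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        if utility(current) > best_u:
            best, best_u = current[:], utility(current)
    return best

print(simulated_annealing())
```

The point is not this particular algorithm; it’s that knowing any such heuristic turns an otherwise unapproachable 10-dimensional search into a routine one.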
I would expect that there are quite a few areas where statistical tools developed within bioinformatics can be useful for people outside of it.
But to come back to the topic of AI. A math expert working in some obscure subfield of math could plausibly do something that advances AI a lot without being an AI expert himself.
Do you consider the invention of the Chomsky hierarchy to lie outside the field of math?
Don’t know. Maybe a resident mathematician would chime in.
I don’t hold economists in very high regard, but I would expect that one of them did contribute at least a little bit to physics.
I am not aware of any. Possibly something minor, who knows.
But to come back to the topic of AI. A math expert working in some obscure subfield of math could plausibly do something that advances AI a lot without being an AI expert himself.
Yes, indeed, that sounds quite plausible. Whether this something is important enough to be potentially dangerous is a question to be put to an expert in the area.